January 6, 2015
NOTE: This article may be reproduced and circulated freely, as long as the copyright credit is included.
The late 1950s and early 1960s saw a rapid proliferation of new programming languages. The world had embraced the stored-program concept and was eagerly looking for breakthroughs in efficient and reliable software development tools.
At the same time, the computer industry was inventing new machine architectures with their own proprietary instruction sets. Every manufacturer's machines were incompatible with every other manufacturer's, and even within the IBM line there was little or no compatibility in either instruction sets or data formats among the 650, the 704, the 705, the 1401, the 1620, and the 7030, and their descendants.
With more than two dozen potentially useful programming languages and over a dozen machine architectures, both apparently growing without limit, the computing world faced a massive problem: Developing compilers for all of those languages to produce object code for all of those machines. For 24 languages on 12 machine types we'd need at least 288 compilers!
Possible relief from that burden was seen in the concept of a UNiversal Computer Oriented Language, an intermediate form. Each new programming language would need only a compiler to generate UNCOL, and each new machine would need only a translator from UNCOL to its machine language. So for our 24 languages on 12 machines we'd need 24 + 12 = 36 processors instead of 288, and the advantage would become even more dramatic with the expected continued proliferation of new languages and machines.
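The arithmetic behind that claim can be sketched in a few lines of Java (the numbers are the article's own; the class and variable names are illustrative):

```java
// Compares the number of translators needed with and without an
// intermediate language like UNCOL.
public class UncolMath {
    public static void main(String[] args) {
        int languages = 24, machines = 12;
        // Direct approach: one compiler per (language, machine) pair.
        int direct = languages * machines;
        // UNCOL approach: one front end per language to generate UNCOL,
        // plus one back end per machine to translate UNCOL to machine code.
        int viaUncol = languages + machines;
        System.out.println(direct + " vs " + viaUncol); // prints "288 vs 36"
    }
}
```

The saving grows multiplicatively: each new language adds one processor instead of one per machine, and vice versa.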
My friend and respected colleague, Tom Steel, worked to popularize this concept in the literature and in the SHARE organization. But UNCOL never flourished for three reasons:
Java originated as a simple programming language for embedded hardware. It became easy to implement through Internet browsers, where users could invoke small programs ("applets") on their own computers of various types as well as larger programs on host server configurations. Compilers generated an intermediate language, Java bytecode, which could then be interpreted by a Java virtual machine (JVM) running on a specific kind of physical machine. Later improvements supported some bona fide compilation of Java bytecode into efficient executable code for a physical machine.
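The compile-once, run-anywhere pipeline described above can be sketched with a trivial Java program (the file and class names here are illustrative, not from the article): `javac` compiles the source once into Java bytecode, and that same `.class` file then runs on any machine with a JVM.

```java
// Hello.java
// Compile once:  javac Hello.java   -> Hello.class (Java bytecode)
// Run anywhere:  java Hello         -> any JVM interprets or JIT-compiles it
// (javap -c Hello disassembles the bytecode itself, for the curious.)
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello from the JVM");
    }
}
```

The bytecode plays exactly the role UNCOL was meant to play: the front end targets it, and each physical machine supplies its own back end, the JVM.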
That was pretty much the UNCOL concept, but this time it succeeded, mainly because users were demanding interoperability over the Internet. Once Java bytecode became widely, almost universally available, it was a tempting target for other programming languages. The list of programming languages for which compilers generate Java bytecode (instead of native computer instructions) continues to grow. For example, I'm about to present a course in which the students will use Clojure on their own computers.
UNCOL may have died, but its concept lives on.
Last modified 8 January, 2014