It must feel comforting that we live in an age where we can program in the languages we do. A glance at the history of programming languages shows a nice trend: programming languages are getting “higher level”.
It is then appealing to attribute this increase in sophistication to increased education in society or to the coming of better language design.
One remarkable smoking gun is the invention of the language Lisp. Lisp is a much higher-level language than most of the languages used today, and anyone who hasn’t looked it up may be interested to know that Lisp was invented in 1958. That is 25 years before even C++. A glance at the TIOBE index shows other revealing facts. C is the most popular language. Yes, that’s right: C. C from 1972, which is also, interestingly, 14 years after Lisp. This raises the question: what is really spurring this increase in high-level languages?
My analysis has led me to one conclusion: what drives languages to become higher level is portability. This I will try to justify by looking at the languages used today and why they came about when they did.
1. Binary Op Codes. People coded using ones and zeros. There was a specific code for each instruction, say, moving memory or adding numbers. The problem was that these codes would not work between machines. Thus a new language was born so that code would require less rewriting to move between machines of a similar architecture: ASM.
2. ASM. While most machines had similar instruction types, even a small difference in architecture, like the number of registers, could render the task of porting a program cumbersome. A language was needed that didn’t care how many registers you had, that didn’t care how high level your op codes were: a language that professed a standard byte length (8 bits to a byte).
This language was called C.
Thus, an operating system could be created that would truly run on different architectures. You might think that, according to my theory, an ASM could simply be restricted to very low specs: for example, supporting only the most common operations like INC and DEC, and only two registers. That solution, however, is in a different domain. Newer architectures were being released that provided increased hardware capacity, and people wanted to write software that would still run on these new platforms at a good speed. As you can see, I am stretching the definition of portability to include hardware support. When you drop knowledge of the number of registers for greater hardware portability, it no longer matters which register you would have liked a value to go in. This increase in abstraction level comes directly from dropping hardware specifics, just as in the progression from binary to ASM.
3. C++. When a programmer hears C++, they usually think of the pillars of object-oriented design (ignore these terms if you don’t know them already): encapsulation, polymorphism, inheritance, abstraction. I’m going to tell you to forget that and focus on one thing. In C, memory is obtained from the operating system by a call to a library function, often called malloc. C++ introduces a new keyword, coincidentally for this article, called “new”. Allocating memory is a very common operation in large-scale programming, and given a discrepancy in the library used to allocate it, you now have unportable source. Make allocation part of the language and you have increased portability. Objective-C is Apple’s variant of C.
4. Java. Java takes the memory issue in C++ one step further: you don’t need to allocate or free memory yourself; instead, a program, the garbage collector, does that for you. Another benefit is that Java ships with a standard library and runtime, the JRE, that is largely compatible between operating systems and does not even require recompilation for different operating systems, let alone architectures. A Windows binary in C++ won’t work on Linux, as the two use different executable formats (see PE and ELF for more info). C# is Windows-branded Java, if you want to know what that’s about.
5. Dynamic languages. These may seem to offer nothing new, but I would still like to explain them. First, no memory management is required, but Java already has that. They also come with standard libraries, but that’s not even a language issue, and again Java has that too. They don’t require compilation, but then why not use a Java interpreter such as BeanShell? The difference with these languages is that they don’t require static types, and this is their abstraction.
Type information is less valuable to a language that is being interpreted line by line. The fact that the source is executed in its raw form makes it more portable, in the loose sense that the developers themselves do not have to compile it.
Where does this leave the future of software, you might ask?
First I’d like to digress and try to justify my reasoning. It seems that if the world wants mountains moved, it requires human power. The bottleneck for this human power is not application-level abstractions but platform-level abstractions.
Since the invention and proliferation of the web, we live in an unprecedented stage in humanity, where a person on almost any computer, be it Mac, Windows or Linux, can write an application utilizing things such as maths, text and graphics (more coming as the web progresses), and run it on almost any other PC, notebook, tablet or handheld.
I think society’s focus on the attractive goal of making software portable between minds is only now starting to come to fruition. Functional programming languages are allowing coders to create more modular code, which in turn is more portable within their applications and languages.
The language of the future may be something akin to lambda calculus. There will be tools for converting this side-effect-free language to other styles and back, which will give developers of different abilities the ability to contribute, read and share code. The notion of a language may even change, given the revealed isomorphism between them. If you want to read code as if it were written in Java, it might be something as simple as changing the syntax style in your editor.
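For readers who haven’t seen it, here is the flavour of the notation (a standard beta-reduction step, my own example rather than anything from a particular tool):

```latex
(\lambda x.\; x + 1)\; 2 \;\to_{\beta}\; 2 + 1 \;=\; 3
```

A term like $\lambda x.\, x + 1$ is simply an anonymous function of $x$; applying it substitutes the argument for $x$, with no side effects anywhere, which is exactly the property that makes mechanical translation between styles plausible.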