I used to drink the Kool-Aid, and it had a nice taste, but the more time passes the more I find myself agreeing with Bart, my mentor of old: objects are a big pile of fail. The Rubyists and the Pythonistas are coming now, with their pitchforks and baling wire. But they need not worry; they will be last against the wall. But to the wall they still will go.
Let us start off by all agreeing that C++ is an abomination. Now, I know most of you will agree with me on that count, but chances are there are a few renegade cells still hanging on (masochists, if they put up with me). If somehow the vastness of the FQA doesn't convince you, even Amazon knows C++ is the dumbest language on earth. One of C++'s greatest claims to fame is that it managed to convince everyone that an "object" is a struct with a vtable. Do you hear that sound? That screaming comes from all the stillborn projects where a dynamic message-passing/actor model of OOP would have saved years of work and millions of dollars. And rather than fix this broken model, C++ layered feature upon feature, each more broken than the last, in an attempt to cover up the smell.
The single greatest failing of objects can be traced back to this failure to even embrace the object-oriented perspective in the first place. If the original idea (or the original post-Simula idea) had been embraced, instead of being bludgeoned in a back alley by a PDP-11 assembler, then the OOP paradigm might have stood a chance. It might have, because OOP is a paradigm, an ideal of organization and style, not a bundle of language features. Language features help, don't get me wrong, but Perl has all you need, as does Haskell, as does C if you do the name mangling yourself. But without people harkening back to the elder days of CLOS and Smalltalk, they will never know what they lost.
But C++ did have one thing going for it: people hate C++. People hated it so much they invented Java to begin hammering the final nails into the coffin. Java sucks, but Java sucks like a pre-frontal lobotomy. If you hate C++, you have verve and determination in your hatred. No one seems to give enough of a shit to work up a good hate for Java. As Jamie Zawinski says, at least Java doesn't have free(); but that can be said of any language invented in the last 28 years.
But there are good reasons to work up a nice heartwarming hatred for Java. For one, explain Java's memory model to me. No, seriously. Perl uses a fifth of the memory across the board, and it's interpreted. Ruby? Python? SWI Prolog? All of them use three to five times less memory. Prolog, ferchrissakes! And don't even get me started on Haskell, Clean, OCaml, or Eiffel. Let's face it: Java's memory model sucks. And even with the JIT, Java is only marginally faster than those compiled languages.
One of the places Java's memory model is broken has to do with its single-minded view of objects: that everything is an object (except when it isn't) and that all objects live on the heap. Oh, where to even begin. Let's just say that adding lightweight tuples ("records" or "structs" to you imperative folks) as primitives to the language would increase performance by 20-50%.
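To make that concrete, here is a minimal sketch (all class names and sizes are hypothetical, purely for illustration) of what the lack of lightweight tuples costs: every "pair" is a headered heap object reached through a pointer, where a struct or record primitive could be laid out flat.

    // Java's only "tuple": a full heap object per pair. Each instance
    // pays an object header plus alignment, and an array of them is
    // an array of pointers to scattered allocations.
    final class Pair {
        final int x, y;
        Pair(int x, int y) { this.x = x; this.y = y; }
    }

    public class TupleCost {
        public static void main(String[] args) {
            // A million pairs: a million headered objects, plus a
            // million references to chase through the cache.
            Pair[] boxed = new Pair[1_000_000];
            for (int i = 0; i < boxed.length; i++) boxed[i] = new Pair(i, -i);

            // What a struct/record primitive could give us: a flat
            // layout with no headers and no pointer chasing. Emulating
            // it by hand means giving up the object abstraction.
            int[] flat = new int[2_000_000]; // x at 2*i, y at 2*i + 1
            for (int i = 0; i < 1_000_000; i++) {
                flat[2 * i] = i;
                flat[2 * i + 1] = -i;
            }
            System.out.println(boxed.length + " " + flat.length);
        }
    }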
While we're harping about performance, consider that setter and getter methods take 300 times longer than direct access. Not percent. Times. Why is this not optimized away? Who knows, but it probably has something to do with the Java security model, which supposedly forbids tail call elimination and inlining. Except it doesn't: you can do both and still have a stack-inspection security model, just not the model Java uses. It's funny that a language so concerned with security offers no efficient mechanism for giving read-only access to fields, nor any deep read-only view of objects. Of course, the whole Java ideology revolves around refactoring into small methods; it's a shame the call overhead is so great. Want to know another hidden performance hole? Iterators. Don't believe me? Try writing an iterator for arrays instead of using the C-style for-loop. Want to know another one? Strings. Never, ever use + on strings. Heck, as Jamie Zawinski says, you'd be much better off just using char[] if you care about memory; and we all know how well that worked out for C.
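The 300x figure comes from the benchmark post linked in the comments; modern JITs will often inline trivial getters, so treat the following as a sketch of how to measure it yourself (invented names, no proper harness, numbers indicative at best) rather than as a definitive result.

    import java.util.ArrayList;
    import java.util.List;

    // Crude microbenchmark sketch: direct field access vs getters,
    // indexed loop vs Iterator, and why + on strings is quadratic.
    public class OverheadSketch {
        static class Cell {
            public int value = 1;
            public int getValue() { return value; }
        }

        public static void main(String[] args) {
            Cell c = new Cell();
            long sum = 0;

            long t0 = System.nanoTime();
            for (int i = 0; i < 100_000_000; i++) sum += c.value;      // direct read
            long t1 = System.nanoTime();
            for (int i = 0; i < 100_000_000; i++) sum += c.getValue(); // via getter
            long t2 = System.nanoTime();
            System.out.println("direct " + (t1 - t0) + "ns, getter " + (t2 - t1) + "ns");

            List<Integer> xs = new ArrayList<>();
            for (int i = 0; i < 1_000_000; i++) xs.add(i);
            long t3 = System.nanoTime();
            for (int i = 0; i < xs.size(); i++) sum += xs.get(i); // C-style loop
            long t4 = System.nanoTime();
            for (Integer x : xs) sum += x;                        // Iterator underneath
            long t5 = System.nanoTime();
            System.out.println("indexed " + (t4 - t3) + "ns, iterator " + (t5 - t4) + "ns");

            // The String advice: each s += piece copies the whole string
            // so far, so repeated + is quadratic; StringBuilder appends
            // in amortized constant time.
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 10_000; i++) sb.append(i);
            System.out.println(sb.length() + " " + sum); // keep sum live
        }
    }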
But we're not here just to talk about the failure of C++ to have defined semantics, or the failure of Java to make the bare minimum of optimizations. We're here to talk about the failure of OOP. Java inherited C++'s struct-based model, so it's barely salvageable in the first place. The fact that it's an IDE-only language with too much boilerplate to be readable by human eyes is beside the point. Both of these languages have failed on a catastrophic level to manage the complexity inherent in large system design. But wasn't OOP supposed to be the silver bullet of modularity, encapsulation, and reuse? Well, that's the point. If OOP was supposed to be so good at those things, then why are we still here? Both of these languages enshrine the public conception of OOP, and both provide too low-level a view to be useful for any kind of reuse. As was said in the OOPSLA debate, they are languages for creating bricks, but bricks do not a city make.
The struct-based notion of objects undermines the very principles of OOP. The problem is that these languages fail to decouple the implementation (the struct in memory and the bytes of code) from the interface that implementation represents (the methods and semantic behavior of the object). Semantically, it is often the case that a subtype is more specified and so has less freedom than its supertype. If we have pairs (X,Y) as the supertype, then (42,Y) is a subtype of it. And yet, with the implementation-inheriting struct model, we are forced to have objects be of monotonically non-decreasing size as we travel down the type hierarchy. That is, even though X is always 42 in the subtype, objects of that subtype are forced to be at least as large as their supertype. Languages like Python and Perl just use dictionaries to encode an object's fields, so they don't suffer from this struct problem, though many have leveled complaints against such free-wheeling approaches. Smalltalk had it right in that you could just gut the inheritance tree and claim to be any type you want, so long as you have the methods.
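In Java terms (a hypothetical sketch; the class names are invented), the problem looks like this: the subtype fixes X at 42 and is semantically smaller, yet the inherited layout keeps every instance at least as large as its supertype.

    // Supertype: an honest pair of fields.
    class Pair {
        int x, y;
        Pair(int x, int y) { this.x = x; this.y = y; }
    }

    // Subtype: x is always 42, so semantically there is less to store...
    class FortyTwoPair extends Pair {
        FortyTwoPair(int y) { super(42, y); }
        // ...but because the struct layout is inherited, every
        // FortyTwoPair still carries a slot for x. Object size never
        // shrinks as you move down the hierarchy.
    }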
So here's another problem, one that'll knock that smug look off the Pythonistas' faces. If you've ever done a serious project in an OO language, then you'll almost certainly have run into the fragile base class problem. The most glaring example of this problem is the idea of having an Object class from which all other classes inherit. It's okay, Python, all the other kids were doing it; I don't blame you. Java has one of the most monumental examples of why this approach is entirely wrong: it's called the Cloneable interface. The idea of a singly rooted inheritance tree is the second most blatant failure undermining the very notion of OOP. If "everything is an object", then you simply design the language so that everything is an object. Your type system says that everything is an object, with whatever that entails. Full stop. Inheritance is a nice little thing, but it is not the be-all and end-all of language design. What makes something an object is not some collection of methods that every value in the language supports; in dynamic languages you could remove those methods anyway. Being an object means simply: being. an object. Trying to have a single class that everything inherits from introduces a bottleneck into the system which guarantees you will be bitten by the fragile base class problem. Smalltalk at least had the good sense to push the singly rooted hierarchy all the way up to MetaObject.
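For those who haven't been bitten yet, the Cloneable fiasco is easy to state (this much is plain Java fact, not rhetoric): the interface declares no methods at all, and Object.clone() is protected, so implementing Cloneable names a capability without actually exposing it.

    // Cloneable is a marker interface: it declares no clone() method,
    // and Object.clone() is protected. Holding a Cloneable gives you
    // no way to actually clone it:
    //
    //     Cloneable c = new Widget();
    //     c.clone();  // does not compile
    //
    // Every class must remember to re-declare clone() public, by hand.
    class Widget implements Cloneable {
        int id;

        @Override
        public Widget clone() {
            try {
                return (Widget) super.clone();
            } catch (CloneNotSupportedException e) {
                throw new AssertionError(e); // unreachable: we are Cloneable
            }
        }
    }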
Anyone who's been following the development of Java over the last decade will begin to notice a pattern compared to C++. And as any good OO developer knows, when you see a pattern you should abstract it out. Only the OO paradigm does not give you the versatility to do so; instead you end up with "design patterns". And when 16 of the 23 design patterns are invisible or simpler in a functional language, you have to wonder whether these patterns are really missing language features. But despite all these design patterns, you also end up with a whole lot of "features". Just consider Java's "simple" OO system: member classes, anonymous classes, local classes, and nested top-level classes. Oh, and language-wide support for monitors. Don't forget the final keyword, which means as many things as virtual and static do; oh, and don't forget your annotations. Don't get me wrong, I like a fully featured language, but these are not they. These languages are designed to make the programmer feel smart for memorizing reams of worthless minutiae. Whatever happened to the concept of a small number of powerful and orthogonal features?
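To spell out the final complaint (this is plain Java semantics; only the class names are invented):

    // One keyword, four unrelated meanings:
    final class Sealed {}              // 1. class: cannot be subclassed

    class Uses {
        final int constant = 42;       // 2. field: assign-once

        final void noOverride() {}     // 3. method: cannot be overridden
                                       //    (the inverse of C++'s virtual)

        void f(final int arg) {        // 4. parameter/local: cannot be
            final int local = arg;     //    reassigned
        }
    }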
So back to the OOPSLA debate, because they said it all six years ago. The future of programming lies in organic programs that can fluidly evolve and heal themselves rather than failing hard. Object-oriented programming, much as I enjoyed its promises a decade ago, cannot cope with such requirements. The OOP peddled by C++ and Java, like so much of computer science, is trapped within the shackles of modernism and the false idol of the Ur-narrative. The OOP peddled by dynamic languages like Perl, Python, Ruby, and Squeak is just as confined by postmodernism and pomo's failed rebuttal of modernism. Pomo is dead, and it has been dead for years.
Modernism believes in a world where there is only a single variety of self, the monoculture. Postmodernism exploded this notion and offers a multiplicity of self, but it is just as self-centered as modernism ever was. The post-postmodernism is performativity, which holds that there is no self, there is only the enactment of a self, and since our actions are ever-changing so too is the self which is constructed by those deeds. It is our very actions which define a state of being, it is our interactions with others which defines our existence. Truth comes from the world, not from within. In the last five years performativity has moved from a sideline philosophy and has now infiltrated most of the social sciences. And so too will it be with computer science. The programs of the future will not be about the program itself, they will be about the programs' ability to interact with other programs and to interact with the world. A computer in isolation is meaningless. Our meaning, our content comes over the wires when we connect with one another. We do not play games alone anymore. We do not play them from behind computer screens. The world has infiltrated our electronic spaces and the programs of the future will exist not in computers but in that world itself. The future is event-driven, interactive, it is a dialectic about the Other, about what is outside of the Self, it is a search for consensus. The future is not about the programs that exist, it is not a time, it is a process, it is about forever becoming.
no subject
Date: 2008-09-01 02:49 pm (UTC)
What's utterly wrong, and what are the glaring facts here?
He's right. Check out http://stephan.reposita.org/archives/2008/09/01/direct-access-300-times-faster-in-java/ for details. I got a speed difference of 100-250 times in my tests.
Re: This is what I learnt from this post
Date: 2008-09-03 06:02 am (UTC)
As for the untraceable reasoning: all of the reasoning in this post is contextual. Follow the links. The OOPSLA debate discusses at great length why OOP has failed. The PDXFunc meeting minutes have a few more points for the record. The FQA enumerates in profound depth why C++ is an abomination, relevant because C++ is often held up as a paragon OO language. Jamie Zawinski's post about why Java sucks, while out of date in a few areas, is still relevant today. The dumbfounding stupidity of Java's memory model is also relevant, since Java is often held up as a paragon of OO. Turtles All The Way Down Please discusses the failure of Ruby's attempt to save people from C/C++'s assignment story. Adding Tuples To Java highlights more about why Java's take on implementing objects is insufficient to save the world. A Tail-Recursive Machine With Stack Inspection demonstrates that Java's superstition about being unable to inline heavily or do TCO for fear of the security model is wrong. The fragile base class problem is apparently well known, according to the link you gave, and the horror that is the Cloneable interface is near and dear to every Java programmer. Yegge's post about when inheritance (so-called "polymorphism" by OO folks) fails also underscores this point.
The writing is on the wall for anyone who cares to look. I'm not here to spoon-feed people the last decade in programming, but they're free to open their eyes. Frankly, if you're getting your education from a drunken blog rant rather than from personal experience, a book, or a university, then your education is as suspect as my ability to form sentences after one G&T too many. And if you can't handle some anthropology and critical theory, then this blog really isn't the place for ye.
Re: This is what I learnt from this post
Date: 2008-09-03 01:30 pm (UTC)
Part 1.
a. C++ never set out to be a full-blown purist OO language. It was intended to be exactly what its name suggests: an increment of C with OO features. You are rating it by criteria that its original author is unlikely to have shared.
b. C++'s struct-plus-vtable offering allowed a migration path for at least a hundred thousand developers coming from the C world, and its author never claimed to be a purist about OO to the best of my knowledge; the struct-with-a-vtable structure made sense in the days of 120MHz CPUs. We can live happily in the multicore, multi-gigahertz environments of today, but back then Smalltalk couldn't even get its shorts on while the C++ programs would've completed the race. The dynamic message-passing/actor model of OOP you refer to couldn't compete in those days (it can compete today).
c. Sure, you could use C and do the name mangling yourself, as you suggested, but manual name mangling on a million lines of code borders on a degree of masochism I wouldn't be keen to adopt. Back when the majority of the development community was still struggling to understand OO, C++ lent a helping hand to get people across the river, and impure OO though it was, people did reach the other side in most cases. The dynamic message-passing paradigms got stuck in the river's vortices until it finally took the helping hand of Intel's speed blasters to slowly drag them out.
d. Java's memory utilisation is way too high, but that's just one of the many factors in evaluating Java. It is a characteristic that may make it difficult to use in some situations (some embedded scenarios?), but you can still run exceptionally high-throughput processing in 4GB of RAM using 50+ threads, and that's not a difficult proposition for most servers.
e. The speed difference in Java you refer to has nothing to do with the security model, to the best of my understanding. It has to do with late binding. Sure, one could explore things like polymorphic inline caching, but I wouldn't go so far as to say this speed differential contributes even in the slightest way to OO being a failure. Java actually runs v. v. fast.
f. As for the OOPSLA debate's line that "they are languages for creating bricks, but bricks do not a city make": you cannot build a city without bricks. These languages may not satisfy your expectation of prefab units, but the component architectures that were supposed to deliver those have succeeded far less than these languages have. In the meanwhile, Python and Ruby do allow you to work with slightly broader constructs than bricks alone.
g. I did not see anything wrong in your example of <x,y> and <42,y>. The languages meet their contracts adequately, and if they do happen to consume more memory in the process, big deal: it's going to be a speck. If it is not going to be a speck, what prevents you from coming up with a supertype (an interface in Java terms, an abstract class in C++) which just has abstract getX and getY and no X and Y fields? You get your memory optimisation too (a sketch of this follows the list below). The inheriting-struct-model constraint is only in your mind, not a constraint imposed by the languages.
h. The fragile base class problem is not a necessary result of every OO design. In fact, if you understand and apply the wide body of knowledge already available, especially the Open-Closed Principle, the Liskov Substitution Principle, and the Dependency Inversion Principle, you will NOT run into the fragile base class problem. Why blame the languages when some enthusiastic proponents used them to back themselves into a corner? That was then (a decade ago); we have sufficient knowledge today to avoid that problem and to use inheritance quite successfully and powerfully.
i. I agree that functional (actually, more appropriately, some of the newer dynamic) languages have features which can help implement design patterns in an easier fashion. But that's just one of multiple trade-off points. For your small number of powerful and orthogonal features, do not forget to check out Python and Ruby. And last I checked, they were still OO.
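Here is the sketch promised in point g (hypothetical names, mirroring the post's pair example): make the supertype an interface of accessors rather than a struct of fields, and the constant-x subtype needs no storage for x at all.

    // The supertype specifies behavior, not layout.
    interface Pair {
        int getX();
        int getY();
    }

    // The general case stores both coordinates.
    class FullPair implements Pair {
        private final int x, y;
        FullPair(int x, int y) { this.x = x; this.y = y; }
        public int getX() { return x; }
        public int getY() { return y; }
    }

    // The specialized case stores only y; x costs no memory at all.
    class FortyTwoPair implements Pair {
        private final int y;
        FortyTwoPair(int y) { this.y = y; }
        public int getX() { return 42; }
        public int getY() { return y; }
    }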
Re: This is what I learnt from this post
Date: 2008-09-03 01:30 pm (UTC)
You talk of modernism and postmodernism and completely ignore the fact that these are aesthetic trends. While programmers, programs, and even language syntax may have overriding aesthetic aspects, language features per se (not the syntactic sugar) are at the end of the day very engineering-driven, and what drives language adoption and success is economics: how these languages influence the economics of the situation. So arguments from aesthetic movements are extremely suspect, imho, for something with such a large engineering and economic dimension to it.
You take a set of issues with OO languages (in most cases real, in some cases imaginary) and go to town painting the entire OO movement black. There is no semblance of balance, no acknowledgment of what is good about them; there is absolutely no credit given to the fact that while the theoretical guys were busy in their ivory towers trying to work out something ideal, these languages were out in the field solving real problems; and there is simply no evidence offered of what else could've solved those problems better. If not C++ / Java / Python / Ruby, what else? And more importantly, how will it help me meet real software delivery needs? This question is unanswered. C++ and Java have risen to the no. 1 and 2 positions today because in many ways they served the needs of the masses and of businesses better than the other options, and that is not even remotely given its due. The arguments are elitist, attempting to take on a wave called the collective wisdom of the market, a wave they are unlikely to survive however articulately they are put, for the simple reason that, more often than not, collective wisdom in the long term is superior to individual opinions.
Finally, one can look back at anything, list out the issues alone, and attempt to call it a failure. While this style of argument can be put forward, it cannot be sold, simply because no one will buy it. For that you need to contrast the issues with the successes, the limitations with the strengths, and then attempt to work out a reasoned value judgement. There is no talk here of successes and no reasoned trade-off between the successes and the failures. Therein lies the dogma.
Re: This is what I learnt from this post
Date: 2008-09-05 09:21 am (UTC)
Certainly decisions about programming language design and implementation are guided by engineering limitations, but they are driven by far more besides. Every language has an agenda. Whether that agenda is to solve a particular domain naturally, to solve it with programs that are fast, to solve it in a particular way, to be a general tool for many domains, to provide a dynamic model of computation, to provide a safe model, to provide a bare-metal model, to open new avenues of thought, to model the real world, to resemble natural language, to be derived from minimalist principles... every language has an agenda, and few of those agendas simply acquiesce to engineering limitations. Engineering explains how; it does not explain why bother.
"If not C++ / Java / Python / Ruby what else ? and more importantly how will it help me meet the real software delivery needs ? This question is unanswered."
This question is not one I aimed to answer. Of all my rants, this one aimed only to express the fact that the OO paradigm has failed, a conclusion which even I have come to only reluctantly over the years. Implicit in this is the requirement that a new paradigm must be proffered, and hence that all the people devoted to OO should explore new options rather than waiting for the economically dominant language to change.
If you want me to sell you something, today I'll sell functional programming; tomorrow, who knows. Will it save the world and make it easy to gain venture capital? No. Will it be forced to evolve over time to handle emergent problem domains and new architectural designs? Yes. Will it fail? That depends on what the criteria for success are. OO claimed to solve modularity and reuse, and we see how that worked out. The criteria FP sets for itself are still in flux, but one universal claim is the ability to catch more errors at compile time thanks to a powerful and theoretically sound type system. That is already a reality, and it is getting better all the time. True parametric polymorphism and the concomitant code reusability are another common claim we've already made good on.
The question of what the "best" language is, is not one that can ever be answered. The definition of "best" is evanescent and ever-changing as it co-evolves with a world and society that continue to grow, change, and adapt. In terms of formal power, they're all Turing-complete. In terms of expressive power, it depends on what you're trying to express. Is Polish or Swahili the better language for poetry? Is Tzotzil or Ainu better for discussing logic? Furthermore, the very idea that such a thing as the "best" exists belies a modernist perspective still trying to shoehorn the world into a Grand Narrative. There is no such narrative; there is no single goal that we're all approaching, some ahead, some behind, some coming from different angles. If culture and art and philosophy are too complex for modernity, so too is programming, which is at once culture, art, and science.
Re: This is what I learnt from this post
Date: 2008-09-05 06:13 am (UTC)
As someone who uses Java for governmental research, I do not agree that it is "v. v. fast". It is much faster than non-compiled languages, I'll agree. But among compiled languages it's not as noteworthy as you make it out to be. Fortran and C are the notable ones among compiled languages; Java is a middle runner, with much higher-level languages quickly closing the gap. The fact that Java is unwilling to adopt optimizations such as lightweight tuples, fast field read access, tail call elimination, or reduced memory overhead leads me to suspect that those higher-level languages will continue to close the gap and will surpass Java rather soon. OCaml is already there, Clean is close behind, and Haskell continues improving in leaps and bounds. All three languages also greatly diminish the amount of programmer time required, which is generally the larger expense.
You claim that memory overheads are not important, but this is false propaganda. When dealing with research-grade programs the size of memory determines the size of models that you can run, which determines the adequacy of your results. If you don't have enough memory to use a good model, your results are worthless. If your model is too small to include all the data you've gathered, then all the expense of gathering that extra data is wasted. Swapping out to disk for lack of memory is also not an option because disk latency is far slower than any language overhead (for compiled languages). The absolute amount of memory used by a language is very important indeed for large classes of programs.
The other big reason that memory is important is that garbage collection takes time. If a language creates too many objects, then it takes a lot of time to crawl through the heap figuring out what to keep and what to dispose of. The benchmarks game doesn't give any indication of how much memory is being used up by per-object overhead vs object churn, but churn is disastrous for time performance just as per-object overhead is disastrous for the reasons discussed above. GCed languages are many orders of magnitude better than non-GCed languages in terms of programmer time, but that does not mean that GC should be taken lightly.