"I've read all of these books myself, so I have no difficulty believing that many moderately competent programmers have read them as well."
What a strange thing to say.
I've read all of the books in the "Claim To Have Read" list and only a couple in the "Have Actually Read" list. So based on my experience I have no problem believing that programmers at least as dumb as me have read the same.
Another data point. Of the "actually read" list I'm at 6 of 10. Of the "claim to have read" list, I claim 3 of 5. So an identical proportion of each group.
Another opinion. If you haven't read the Go4 patterns book, you don't deserve to call yourself a real software engineer -- and it's an eminently readable book, too. And if you haven't read "The C++ Programming Language", you should not be behind the wheel of a C++ compiler. I haven't read Knuth or the Algorithms book, but I studied the topic extensively; you really need that kind of foundation to be a full-fledged software engineer.
> If you haven't read the Go4 patterns book, you don't deserve to call yourself a real software engineer
What do you find in this book that (1) you are unlikely to find elsewhere, and (2) is critically important to being a decent software engineer? The way I see it, we don't even need to know OO to be a good programmer. (By the way, is there a difference between "programmer" and "software engineer"?)
Disclaimer: my current opinion about OO is that it mostly sucks. Alas, I don't know if it is because I know too little or too much of it.
> my current opinion about OO is that it mostly sucks. Alas, I don't know if it is because I know too little or too much of it
Unfortunately OOP won, in the face of all alternatives, so whether you like it or not, you're going to need it.
As to why OOP has won ... as with imperative programming in general, it's easier to wrap your head around it without much theoretical background. I have trouble seeing a 10-year-old learning category theory to be able to read/write files in Haskell (contrary to popular belief, you do need lots of knowledge when you want to combine monads).
It's all about polymorphism, which enables composability / reuse / decoupling.
In OOP polymorphism is natural. In Haskell, the only statically typed functional language where polymorphism is done right, the learning curve is quite high.
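To make the claim concrete, here is a minimal sketch of what "natural" OO polymorphism looks like in a dynamic OO language; the class and function names are illustrative only, not from any particular codebase:

```python
# Any object with a render() method can be passed to draw_all:
# new shapes compose with existing code without modifying it.
class Circle:
    def render(self):
        return "circle"

class Square:
    def render(self):
        return "square"

def draw_all(shapes):
    # Polymorphic call: each shape supplies its own render().
    return [s.render() for s in shapes]

print(draw_all([Circle(), Square()]))  # ['circle', 'square']
```

No type-class machinery or generics are needed here, which is the sense in which the learning curve is lower.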
Languages from the ML family are very suitable for symbolic processing (theorem proving, compilers), but OOP is versatile and can be used efficiently on a whole range of problems ... including writing compilers ... http://tinlizzie.org/ometa/
You might have been burnt by the static OOP languages; the trouble is that OOP mixes with static typing like oil and water ... take a look at Smalltalk or at CLOS sometime. CLOS is even more capable, as it supports things like multiple dispatch.
> You might have been burnt by the static OOP languages…
I have. This is crushing. And the fact that they won't die any time soon makes me feel worse. One of my colleagues even said to me with a straight face that "serious" programming couldn't be done but in C++ (if only he had omitted the "but").
I think the problem with Haskell isn't its learning curve. It's where you have to start from: scratch. Someone who knows C, Java, and Python won't be able to use much of that knowledge to learn Haskell. On the other hand, Haskell could be taught in a first programming course. (Like OCaml was in my case.)
Is this still OO?? The core language is based on pattern-matching! The way I see it, the OO part has been pushed to the periphery. If I do parsing in OMeta, I doubt I could claim I did it in an OO way.
What do you find in this book that (1) you are unlikely to find elsewhere, and (2) is critically important to being a decent software engineer?
I see two important benefits from Patterns:
1. As a (partial) replacement for experience. I've been doing this for 20+ years, and I'd encountered most of it at some point before reading the book, so it's not essential in this respect. But someone just starting out could assimilate directly a chunk of what I had to figure out on my own. And even for someone with more experience, it standardizes the details of the pattern, leading to greater consistency in the software.
2. To provide a common vocabulary amongst developers. When discussing a design with someone who's read the book, I can say "I think we can solve that by using the strategy pattern", and they'll know what I mean. Without this common vocabulary we'd spend a lot more time explaining, and at greater risk of being misunderstood.
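As an illustration of that shared vocabulary, here is a bare-bones sketch of the strategy pattern mentioned above; the domain (a checkout with pluggable discount policies) and all names are invented for the example:

```python
# Strategy pattern: interchangeable algorithm objects behind one interface.
class NoDiscount:
    def total(self, price):
        return price

class HolidayDiscount:
    def total(self, price):
        return price * 0.9

class Checkout:
    def __init__(self, strategy):
        # The behavior is injected, so Checkout never changes
        # when a new pricing policy is added.
        self.strategy = strategy

    def charge(self, price):
        return self.strategy.total(price)

print(Checkout(HolidayDiscount()).charge(100))  # 90.0
```

Saying "use the strategy pattern" communicates this entire structure in four words.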
is there a difference between "programmer" and "software engineer"?
This is just my philosophy, but I think that computer programming in itself is a simple activity, you could teach 'most anybody to write a program -- and that's the reason that so much software sucks. When done correctly it's an engineering discipline, including analysis, modeling, planning, documenting, and a certain degree of programming. But the actual implementation of code is a minority of the job.
The way I see it, we don't even need to know OO to be a good programmer. ... my current opinion about OO is that it mostly sucks
Given my definition above, I'd have to agree that you don't need to know or do OO in order to be a programmer. But if you aspire to the fuller job of software engineering, it would be foolish to exclude such an important tool from your repertoire.
Regardless of its actual merits, it's objectively true that a huge portion (I'd think even a majority) of development tools (including platforms, languages, IDEs, frameworks, libraries, etc.) are geared toward OO development. Eschewing those means that you're forgoing much of the foundational stuff that our predecessors have built for us (standing on the shoulders of giants and all).
I'll grant that the industry in the '90s may have been a bit manic about OO. Since then we've learned that the paradigm has weaknesses and indeed flaws. But we do generally know what those are, and have found ways to work around them. We understand now that C++ is (insanely) complex and rigid, and modern OO manages to retain most of the benefits while shedding those problems. The contemporary dynamic languages build on a foundation of OO development while avoiding many of its pitfalls (at the low level at least). And what I think of as the cutting edge, the functional languages, seem to have found a way to deliver their benefits while generally coexisting with the OO paradigm. Which, of course, ties back to what I said earlier about keeping your toolbox full for whatever can best solve a problem.
Now, there is still something that bothers me. OO is obviously very important. However, network effects look like they play a huge part. I am a perfectionist. As such, I wouldn't like to settle for a local maximum. For example I feel that IDE aside, functional programming with an advanced type system is better than OO for most purposes (ML and Haskell fit this pet paradigm of mine).
The problem is that I fail to see how OO could be important by itself. (Like I fail to see how Windows could survive GNU/Linux in a world where every program and driver ships on both.) As anecdotal evidence, I can't solve a fairly universal problem: the representation of optional data. With inductive types (also called sum types, or algebraic data types), this is easy:
-- Type definition
data Option a = Nothing
              | Something a

-- Example of use
case computationThatMayFail of
  Nothing     -> "I failed"
  Something x -> "I succeeded. Result: " ++ show x
I tried, even asked, to do this in an OO way. No luck so far. So, until I find an acceptable solution, I will doubt OO is best for, well, nearly all purposes. (Note that by "not best", I do not mean "bad". I just mean we can do better.)
It's clear that you're trying to solve the problem in a functional way, and not in the way that an alternate paradigm would lend itself to. Indeed, elsewhere you admit
Someone who knows C, Java, and Python won't be able to use much of that knowledge to learn Haskell. On the other hand, Haskell could be taught in a first programming course. (Like OCaml was in my case.)
I think you're falling prey to the reverse of this problem: your mind is stuck in functional-land, and you need to switch your mode of thinking for success in an OO setting.
The approach to this is clunky in C++, Java, or C#. The normal approach is for the "parent" object to contain a collection of its optional attributes. One would insert new attributes into this collection, and request their values from there later. Another approach you'll see, depending on the needs, is to use generics or templates to package the attribute with sidecar information that indicates whether it's actually "there", as with C# nullable types.
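The "collection of optional attributes" approach described above, transliterated into Python for brevity (the Java and C++ versions use a Map or std::map in the same role); the class and method names are purely illustrative:

```python
# The parent object holds its optional attributes in a dictionary,
# and callers insert and request values by name.
class Widget:
    def __init__(self):
        self._extras = {}  # optional attributes live here, keyed by name

    def set_extra(self, name, value):
        self._extras[name] = value

    def get_extra(self, name, default=None):
        # Absence is handled by returning a caller-supplied default.
        return self._extras.get(name, default)

w = Widget()
w.set_extra("tooltip", "Click me")
print(w.get_extra("tooltip"))       # Click me
print(w.get_extra("icon", "none"))  # none
```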
In newer dynamic languages like Python (which is an evolution of the OO paradigm), though, this is simplicity itself. Indeed, it's at the heart of dynamic-language programming, and is what I referred to in my previous post when I criticized the rigidity of C++. In Python, one isn't bound by the static definitions of an interface. If you want to add an additional attribute to your object, well, what are you waiting for? Just go and add that attribute to it. Later on, use duck typing to handle the presence or absence as necessary.
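A minimal sketch of that dynamic approach, with invented names: attach the attribute when you have it, and probe for it later.

```python
class Node:
    pass

n = Node()
n.label = "optional data"   # add the attribute on the fly

# Handle presence or absence at the point of use.
if hasattr(n, "label"):
    print(n.label)          # optional data
else:
    print("no label")
```

`getattr(n, "label", default)` expresses the same thing in one call when a fallback value suffices.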
It depends on your definition of OO, because there isn't "a" definition of OO. Typeclasses in Haskell aren't classes, as I well know, but on the other hand it is true that they do fill in for some of the roles that interfaces do in OO languages. Fundamentally, what is an object? A collection of data and associated operations as an atomic unit. Typeclasses do fit that role. Another way to do "OO" in Haskell is:
data Animal = Animal { name :: String,
is_awake :: Time -> Bool,
lives_in :: Environment -> Bool }
Other OO things could be built as well in a perfectly functional way, picking and choosing the bits of the definition of OO you want. What you can't do is choose exactly what C++ or Java gives you, and what you don't get is a Blessed Object Orientation Technique like you do with those languages. But you can program OO just as you can in C. (Only better in most ways. Note you don't get a BOOT in C either, but there are nevertheless OO C programs.)
Mutability is merely one dimension of OO, a term that is so flexible it basically means nothing without further qualification. OO has its place even in a functional program sometimes, as that link shows.
My definition of OO is roughly the one given here: http://www.info.ucl.ac.be/~pvr/paradigms.html Meaning, doing OO is basically using closures and mutable state (let's say objects are a form of closure). I don't like mutable state, therefore I don't like OO as defined above.
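The "objects are a form of closure" view can be sketched in a few lines; this is an illustrative toy, assuming nothing beyond the definition itself:

```python
# Two functions closing over the same mutable cell behave exactly
# like an object with two methods and a private field, but no class.
def make_counter():
    state = {"count": 0}

    def increment():
        state["count"] += 1

    def value():
        return state["count"]

    return increment, value

inc, val = make_counter()
inc()
inc()
print(val())  # 2
```

The mutable state hidden inside the closure is precisely the part being objected to.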
We can also define OO as the use of inheritance. The problem with inheritance is that it undermines decoupling: more often than not, derived classes are tightly coupled to their base class. I don't like that. This paper also suggests that inheritance is not very good, because it appears to increase the error rate sixfold: http://www.leshatton.org/Documents/OO_IS698.pdf (Note that I don't take this paper as proof that OO is bad in general. Rather, it strongly suggests that C++ without templates or the STL, used in an OO fashion, is worse than plain C.)
Now, if you forbid (or severely limit) both mutable state and inheritance, I really don't see what is left of OO. We could see your `Animal` data type and the OCaml module system as forms of OO, but at this point, "OO" would mean anything (and therefore be meaningless).
(Note: when I say "I don't like X", I mean I will avoid X as long as the resulting solution isn't demonstrably simpler.)
If you haven't read the Go4 patterns book, you don't deserve to call yourself a real software engineer
I read it, hated it, and would never recommend it for a few reasons, but mostly because I already knew Common Lisp when I read it (though I was writing Java full-time in my day job at the time) and so each time I read a new pattern I would think about how it was usually just an ugly hack to work around the poor design of a language like Java.
You might argue that a "real software engineer" will have to use crappy languages at some point and so should know the hacks to compensate. I'd argue that knowledge of a bunch of different languages makes those hacks obvious anyway - so I'd tell people to just spend time learning lots of languages instead of reading the patterns book.
The problem with GoF (haven't read it cover-to-cover, but have browsed it for inspiration) is that it is more or less an encyclopedia. It lists the discrete patterns, gives them names, as if they've come down the mountain on stone tablets.
I found a book like Head First Design Patterns to be much more bottom-up. It hammers home the idea that discrete design patterns really are just the organic side-effects of applying concepts (meta-patterns?) like "composition over inheritance", "don't repeat yourself", etc., to recurring problems in computer science.