The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said "Master, I have heard that objects are a very good thing - is this true?" Qc Na looked pityingly at his student and replied, "Foolish pupil - objects are merely a poor man's closures."
Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures. He carefully read the entire "Lambda: The Ultimate..." series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.
On his next walk with Qc Na, Anton attempted to impress his master by saying "Master, I have diligently studied the matter, and now understand that objects are truly a poor man's closures." Qc Na responded by hitting Anton with his stick, saying "When will you learn? Closures are a poor man's object." At that moment, Anton became enlightened.
Can someone please enlighten me on what makes inheritance, polymorphism, and operator overloading so bad? I use them all regularly, and have yet to experience the foot cannons I have heard so much about.
If the only tool you have is a hammer, everything looks like a nail.
That's the only answer I can think of to your question. Some problems are best solved with other tools; for text parsing, for example, you might want to call out to code written in a functional language.
Oh, thanks then! I've heard people shred on OOP regularly, saying that it's full of foot cannons, and while I've never understood where they're coming from, I definitely agree that there are tasks that are best solved with a functional approach.
I don't think that the anti-OOP collective is attacking polymorphism or overloading - both are important in functional programming. And let's add encapsulation and implementation hiding to this list.
The argument is that OOP produces the wrong abstractions. Inheritance (as OOP models it) is actually quite rare among business entities. The other major example cited is that an algorithm written in the OOP style ends up with its code distributed across different classes, and therefore:
It is difficult to understand: the developer has to open two, three or more different classes to view the whole algorithm.
It is inefficient: because the algorithm is spread over many classes and instances, running it involves a lot of unnecessary calls (e.g. a method on one instance has to iterate over its many child instances, and each child has to iterate over its own children), and data has to be passed through all of these calls.
Instead of this, the functional programmer says, you should write the algorithm as a function (or several functions) in one place, so it's the function that walks the object structure. The navigation is done using tools like apply or map rather than a loop in a method on the parent instance.
A key insight in this approach is that the way an algorithm walks the data structure is the responsibility of the algorithm rather than a responsibility that is shared across many classes and subclasses.
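To make the contrast concrete, here is a minimal sketch in Python, using a made-up Leaf/Node tree of my own (nothing from a real codebase): the whole traversal lives in one function that recurses with map, rather than being spread across methods on each class.

```python
from dataclasses import dataclass

@dataclass
class Leaf:
    value: float

@dataclass
class Node:
    children: list  # each child is a Leaf or another Node

def total(tree) -> float:
    """Sum every leaf value; the traversal logic lives here, not in the classes."""
    if isinstance(tree, Leaf):
        return tree.value
    return sum(map(total, tree.children))

print(total(Node([Leaf(1.0), Node([Leaf(2.0), Leaf(3.0)])])))  # 6.0
```

The classes stay as plain data, and anyone reading the algorithm only has to open the one function.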
In general, I think this is a valid point - when you are writing algorithms over the whole dataset. OOP does have some counterpoints: encapsulating behaviour that concerns just that object, for example validating the object's private members, or processing data for that object and its immediate children or peers.
Sounds reasonable to me: with what I've written I don't think I've ever been in a situation like the one you describe, with an algorithm split over several classes. I feel like a major point of OOP is that I can package the data and the methods that operate on it into a single encapsulated unit.
Whenever I've written in C, I've just ended up passing a bunch of structs and function pointers around, basically ending up doing "C with classes" all over again.
Indeed, I'd say an algorithm split among different objects is usually an indication of tightly coupled code. Every code pattern has its pitfalls for inexperienced devs, and I think tight coupling is OOP's biggest.
I don't really think it's any of those things in particular. I think the problem is there are quite a few programmers who use OOP, especially in Java circles, who think they're writing good code because they can name all the design patterns they're using. It turns out patterns like Factory, Model View Controller, Dependency Injection etc., are actually really niche, rarely useful, and generally overcomplicate an application, but there is a subset of programmers who shoehorn them everywhere. I'd expect the same would be said about functional programming if it were the dominant paradigm, but barely anyone writes large applications in functional languages and thus sane programmers don't usually come in contact with design pattern fetishists in that space.
Operator overloading adds complexity, making code subtly harder to read.
The most important lesson about code is that it should primarily be written to be easy for humans to read, because if the code is not trash, it will be read far more often than it is written.
I would argue that there are definitely cases where operator overloading can make code clearer: specifically, when you are working with some custom data type for which the mathematical operations are well defined.
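For instance (a toy Python example of my own, not anything from a real library): a small 2-D vector type where + and * are mathematically well defined reads much more naturally than chained calls to add() and scale().

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vec2:
    x: float
    y: float

    def __add__(self, other: "Vec2") -> "Vec2":
        return Vec2(self.x + other.x, self.y + other.y)

    def __mul__(self, k: float) -> "Vec2":
        return Vec2(self.x * k, self.y * k)

a, b = Vec2(1.0, 2.0), Vec2(3.0, 4.0)
v = a + b * 0.5          # reads like the maths it represents,
print(v)                 # Vec2(x=2.5, y=4.0), vs. a.add(b.scale(0.5))
```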
Because an object is good at representing a noun, not a verb, and when expressing logical flows and concepts, despite what Java will tell you, not everything is, in fact, a noun.
I.e., in OOP languages that do not support functional programming as a first-class feature (like Java), you end up with a ton of overhead, unnecessary complications, and objects named like generatorFactoryServiceCreatorFactory, because the language forces you to create a noun (an object) to take an action rather than just create a verb (a function) and pass that around.
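A toy illustration of the same point in Python (all names invented for the sake of the example): when functions are first-class you just pass the verb around, instead of wrapping it in a one-method noun.

```python
# The "noun" workaround: a one-method class whose only job is to carry an action.
class ReportGenerator:
    def generate(self, data):
        return f"report: {sum(data)}"

def run_noun(generator, data):
    return generator.generate(data)

# The "verb" version: just pass the function itself.
def generate_report(data):
    return f"report: {sum(data)}"

def run_verb(action, data):
    return action(data)

print(run_noun(ReportGenerator(), [1, 2, 3]))  # report: 6
print(run_verb(generate_report, [1, 2, 3]))    # report: 6
```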
This makes sense to me, thanks! I primarily use Python, C++ and some Fortran, so my typical programs / libraries aren't really "pure" OOP in that sense.
What I write is mostly various mathematical models, so as a rule of thumb, I'll write a class to represent some model, which holds the model parameters and methods to operate on them. If I write generic functions (root solver, integration algorithm, etc.) those won't be classes, because why would they be?
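Something like the following sketch, with an invented exponential-decay model just to show the shape: the class holds the parameters and the methods that use them, while the generic routine stays a plain function.

```python
import math

class DecayModel:
    """Holds the model parameters and the methods that operate on them."""
    def __init__(self, amplitude: float, rate: float):
        self.amplitude = amplitude
        self.rate = rate

    def evaluate(self, t: float) -> float:
        return self.amplitude * math.exp(-self.rate * t)

def integrate(f, a: float, b: float, n: int = 1000) -> float:
    """A generic trapezoidal rule; it's just a function, not a class."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return total * h

model = DecayModel(amplitude=1.0, rate=2.0)
print(integrate(model.evaluate, 0.0, 5.0))  # roughly 0.5
```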
It sounds to me like the issue here arises more from an "everything is a nail" type of problem than anything else.
I am very fond of the idea of "stateless" code, which may seem strange coming from a person who likes OOP. When I say "stateless", I am really referring to the fact that no class method should ever have side effects: either it is an explicit set method, or it shouldn't affect the output of the object's other methods. Objects should be used as convenient ways of storing and manipulating data in predictable, readable ways.
I've seen way too much code where a class has methods which will only work "as expected" if certain other methods have been called first.
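A made-up Python example of what I mean (the class names are hypothetical): the first shape silently depends on call order, the second makes the required state explicit.

```python
# The pattern being criticised: calculate() only works if load() ran first.
class FragileReport:
    def load(self, rows):
        self._rows = rows           # hidden state, set as a side effect

    def calculate(self):
        return sum(self._rows)      # AttributeError if load() was skipped

# A more predictable shape: required state goes through the constructor,
# and calculate() has no hidden ordering requirements or side effects.
class Report:
    def __init__(self, rows):
        self._rows = list(rows)

    def calculate(self):
        return sum(self._rows)

print(Report([1, 2, 3]).calculate())  # 6
```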