Programming paradigms are a lot like political parties -- they tend to lump a lot of disparate things together with a weakly uniting theme. You don't need inheritance for encapsulation to be useful, for instance.
The problem is, sometimes you agree with only a small part of the platform. None of these things individually are terrible ideas if tastefully applied, but it all gets clumped together into one big blob of "the right way to do things" (aka object oriented programming). I blame languages like Java for selling certain ideas as The Right Way, and building walls that intentionally prevent you from using other techniques from different schools of thought ("everything is an object, no you can't write a function outside of a class").
I think the functional paradigm has a lot of good ideas too, but in my experience they're just as annoying if they're strictly and tastelessly applied in the same way OOP principles often are.
Don't be a "functional programmer", just take the ideas that are useful.
I tend to prefer languages and tools that adopt good ideas without promoting a single specific way of thinking.
Except that there is a real advantage to using pure functional programming: being able to easily prove theorems about your code and to understand different components in isolation. There is a reason why the majority of proof assistants are implemented as functional languages.
Most functional languages even give you ways of modelling imperative code (e.g. monads) in a way which hardly sacrifices expressiveness.
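To make the "modelling imperative code" point concrete: in Haskell, a State monad gives imperative-looking code a pure meaning. Here is a minimal hand-rolled sketch (the production versions live in the `transformers`/`mtl` packages; `bumpTwice` is a made-up example name):

```haskell
-- A hand-rolled State monad: imperative-looking code, pure underneath.
-- A stateful computation is just a function from a state to a result
-- plus the next state.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  State g >>= k = State $ \s ->
    let (a, s') = g s in runState (k a) s'

get :: State s s
get = State $ \s -> (s, s)

put :: s -> State s ()
put s = State $ \_ -> ((), s)

-- "Imperative" program: read the counter, bump it twice, return the old value.
bumpTwice :: State Int Int
bumpTwice = do
  old <- get
  put (old + 1)
  n <- get
  put (n + 1)
  pure old
```

`runState bumpTwice 10` evaluates to `(10, 12)`: the sequencing reads like mutation, but the whole thing is an ordinary pure function of the initial state.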
The real problem with OOP is not that it forces you to do things a particular way, it's that what's new about it (privileging the first argument of each procedure, inheritance, lots of hidden mutable state) is bad, and what's good about it (encapsulation, polymorphism) is not new.
The real disadvantage of pure functional programming is that it is notoriously difficult to use, and thus only very few programs outside specialized domains are written in one.
For example I would guess that in a typical enterprise (say, a manufacturing company), 0.0001% of the software is written in a pure functional language.
The things OO languages 'force' on us seem to be relatively easy to use: sending a message to a certain object, thinking in categories (elephants, houses, chairs, movable objects, ...), state (position, speed, size, shape), ...
Thus it is not surprising that the overwhelming majority of software written in the last 10 years has been written in OO languages.
I work in a group of teams that is mostly new college grads, and nobody has trouble writing pure FP business logic in Scala. We don't go as far as doing pure FP for all effects, though (although we are starting to do that more as well).
What role does the proving of theorems about your code play in your development process? I ask because that was singled out as one of the main reasons for using functional languages by Eutectic in the comment that started this sub-thread.
I'm going to answer this question in the larger scope of "how does pure FP help us be confident that we are pushing correct code to production?"
"Prove it" is our first line of defense. By always writing total functions, we can be confident that our services and jobs don't fail in uncontrolled ways. "Proving of theorems" is what you end up doing as you write total functions. NonEmptyList, ADTs, Options, Coproducts, These, singleton types, sized collections. All these things can provide powerful proof that is used to write total functions. The proof that various types provide makes it easier to write total functions as we can use Scala's powerful type system to greatly restrict which programs we even have to think about. And (thinking in reverse), forcing yourself to write pure, total programs ends up forcing you to prove various things about your program in order to achieve those properties.
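As an illustration of how those types "prove" things that make functions total (written in Haskell notation, though the same types exist in Scala via cats/scalaz; `firstItem` and `safeFirst` are made-up names):

```haskell
import Data.List.NonEmpty (NonEmpty(..))
import qualified Data.List.NonEmpty as NE

-- A partial version would be  unsafeFirst :: [a] -> a  -- crashes on [],
-- so its type lies about which inputs it handles.

-- Total: the NonEmpty type itself is a proof that at least one
-- element exists, so no failure case needs handling.
firstItem :: NonEmpty a -> a
firstItem = NE.head

-- Total the other way around: we accept any list, but the Maybe in the
-- result forces every caller to handle the empty case.
safeFirst :: [a] -> Maybe a
safeFirst []      = Nothing
safeFirst (x : _) = Just x
```

Either choice makes the function total; the type system then restricts which programs you even have to think about, exactly as described above.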
So all in all, we perform the "proving of theorems" all the time. It just doesn't look or feel like it. But a large portion of CR discussion is about the types used in a program, which is in effect a discussion about which theorems we think we should prove before we push the code to production.
In my experience, business logic is often relatively easy to express in pure FP. Where things start to get trickier is the boilerplate and boundaries, e.g. I'm writing a Rails app at the moment where the business logic is in pure functions, but the boundaries with other systems like ActiveRecord and external webservices tends to end up being more object-oriented.
For example, the ruby library I use to connect to webservices is very OO, and requires you to derive from its base class to build each webservice client.
This is of course very natural in Ruby, but I do see lots of OO in Scala libraries too (e.g. Spray).
> In my experience, business logic is often relatively easy to express in pure FP.
Agreed. This makes it a pretty easy sell to get everyone to commit to using immutability and purity and all those nice things in the business logic layer.
> Where things start to get trickier is the boilerplate and boundaries, e.g. I'm writing a Rails app at the moment where the business logic is in pure functions, but the boundaries with other systems like ActiveRecord and external webservices tends to end up being more object-oriented.
We're in the same boat, except in an in-house Java-based service framework. The "shell" of the program reads from the DB/makes service calls to gather input, passes this input to the pure business logic layer, and then based on the output throws exceptions, writes to the DB, makes other service calls, etc. This conscious division between effects and logic alone makes programs better.
In areas where we need concurrency or have more complicated effects we wish to unit test, we have started to use scalaz.concurrent.Task and scalaz.Free.
Nope, we mostly come from either state schools in our home states or small liberal arts schools. I know of two hires from those schools, and I am the one who taught them pure typed FP.
I'm the only one who came in knowing pure FP at all and I didn't learn it via classes.
That's a bit biased. Berkeley and MIT students are pretty pragmatic, and are not typically FP enthusiasts (as in, many aren't even if some are on their own accord). Many European universities have more of an affinity to FP, comparatively speaking.
Yes but the majority of excel sheets consist of calculations. The source code of many OO programs will also consist of pure functions if that program is focused on calculation.
So calculations are easy to express as pure functions, everybody knows that.
On the other hand, if you want to make an interactive application in excel you use VBA which exposes a large and rich set of objects.
The paper deals with more examples of calculations: convert Fahrenheit to Celsius, calculate volume. It does not cover interactive applications. I hope you are not redefining the problem to fit the solution, which can be a weakness of pure mathematicians and functional programmers. (Negative though it is, I have to add: it's a waste of everybody's time. It would be much better if people were direct about what their solution addresses, instead of leaving it vaguely defined in the hope that people will think it is of practical use for more than it actually is.)
Yes, Excel is limited. Yet, people manage to solve lots of business relevant problems within its constraints.
Excel is very interactive: you can change input and immediately get a different output. (I think you were trying to talk about a different restriction. Excel is very limited in the kinds of inputs and outputs you can make, and even more so in the kind of side-effects you can cause.)
99.99% of what people do with Excel does not qualify as "software", unless you classify "A1+C3" as "software", in which case calling it "functional" becomes meaningless.
Oh God, if you saw the Excel workbook my previous boss created as the back end for the financial calculations used in all our applications you'd scream in horror. Like just about every contingency in the process was done within the excel workbook and not just as a wrapper around some simplified formulas.
Actually, from what I've seen, pure FP forces people with less experience into design and structures of their code that they'd only do in imperative settings with considerably more experience.
E.g. when I ask students to do some simple exercise like "write a function that finds the shortest path through a given graph", in impure languages their solutions tend to return a path if they find one, but just give up (with e.g. a message printed to the screen) when there's no path.
In pure and typed languages, I see more people reaching for a Maybe or Either return type to express that the function, as specified, might not succeed.
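A sketch of what that looks like: a toy breadth-first search in Haskell, where the Maybe in the type records that no path may exist, so the caller must handle that case instead of the function printing a message and giving up. (`shortestPath` and the adjacency-map representation are illustrative choices, not anyone's assignment solution.)

```haskell
import qualified Data.Map as M
import qualified Data.Set as S

type Graph = M.Map Int [Int]

-- Breadth-first search. The Maybe return type expresses "might not
-- succeed" directly: Nothing when no path exists, Just the path otherwise.
shortestPath :: Graph -> Int -> Int -> Maybe [Int]
shortestPath g from to = go (S.singleton from) [[from]]
  where
    -- Queue of paths (each stored reversed); seen guards against cycles.
    go _ [] = Nothing
    go seen (path@(v : _) : rest)
      | v == to   = Just (reverse path)
      | otherwise =
          let next = [ w | w <- M.findWithDefault [] v g
                         , not (S.member w seen) ]
          in go (foldr S.insert seen next)
                (rest ++ [ w : path | w <- next ])
    go seen ([] : rest) = go seen rest  -- unreachable, kept for totality
```

Whether or not the student writes it this way, the signature alone forces the "no path" case to be a value, not a side effect.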
Funny choice of example: the algorithms I am aware of for finding shortest paths require mutable vertices that store the scores and pointers (edges) for the best path found so far.
No doubt there are ways to do it in a pure functional language -- but that requires a bit of thought. Compared to that difficulty, this business of return values seems trivial.
Worse, a language that forces people to Do The Right Thing might be acceptable for experienced programmers who want to work within that limitation. But when given to students, it just encourages closed-minded rule-following.
> Worse, a language that forces people to Do The Right Thing might be acceptable for experienced programmers who want to work within that limitation. But when given to students, it just encourages closed-minded rule-following.
Citation needed. I find the safety net of that kind of language empowering: I can concentrate on thinking about the actual problem, and be confident that the language won't let me shoot myself in the foot.
> Worse, a language that forces people to Do The Right Thing ... just encourages closed-minded rule-following.
I think experience shows that freedom to do the wrong thing frequently leads to the wrong thing happening. There are many case studies to be found in security and concurrency.
> [...] the algorithms I am aware of for finding shortest paths require mutable vertices that store the scores and pointers (edges) for the best part found so far.
No problem. Traditional loops with mutable variables translate straightforwardly into tail-recursive calls, if you want to write your algorithm like that.
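For example, a running-total loop becomes an accumulator-passing helper: each mutable variable turns into an argument, and each iteration into a tail call with the updated values. (A minimal sketch; `sumTo` and `go` are made-up names.)

```haskell
-- Imperative pseudocode:
--   total = 0; i = 1
--   while i <= n: total += i; i += 1
--   return total
-- The mutable variables `total` and `i` become accumulator arguments of a
-- tail-recursive helper; each loop iteration is a call with updated values.
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go total i
      | i > n     = total
      | otherwise = go (total + i) (i + 1)
```

GHC compiles the tail call to a jump, so the translation costs nothing at runtime.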
Though actually my argument was less about not mutating your function's internal variables---and more about interacting with the world outside your function via a well-defined interface. (In this case, via arguments and returned values.)
The most 'mainstream' language that encourages such a strong adherence to declared interfaces is Haskell. But the imperative D can do something similar:
"In a slightly less precise way, this means that pure functions always have the same effect and/or return the same result for a given set of arguments. As a consequence, a pure function for example cannot call other impure functions, or perform any kind of I/O (in the classical sense)."
Haha. No worries. Mine is also just an observation I made with the few people I'm mentoring and teaching.
Also, they seem to grasp recursion much better via Haskell than in Python or Java. I believe that's mostly because Haskell makes structural recursion real easy (ie recursion along the structure of your data type, eg like along a tree)---and once they are comfortable with the basic idea from that, other patterns of recursion are easier to understand.
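For instance, summing a binary tree: the recursion mirrors the shape of the data type, with one equation per constructor and a recursive call exactly where the type refers to itself. (`Tree` and `treeSum` are illustrative names.)

```haskell
-- An algebraic data type: a tree is either a leaf or a node with
-- two subtrees and a value.
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Structural recursion: one case per constructor, recursing exactly
-- where the type is recursive (the two subtrees).
treeSum :: Num a => Tree a -> a
treeSum Leaf         = 0
treeSum (Node l x r) = treeSum l + x + treeSum r
```

Once the pattern "follow the constructors" clicks here, the same students find accumulators and other recursion patterns much less mysterious.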
Algebraic datatypes and pattern matching on them are a great benefit for that structural recursion.
There are things that are painful to express in a pure functional language. I'll give you an example: rasterize a triangle to a mutable buffer. (An inherently mutable and stateful algorithm). You can certainly do this in a purely functional language, but I doubt it will be any clearer or more performant than its procedural C counterpart.
I think pure functional programming is super useful in other contexts, for instance, a compiler. I'm wary of people suggesting that a specific school of thought is universally applicable, though.
While I agree, within the same context, I want to point out that right next to rasterization you have shaders, which are a poster child for functional programming.
A vertex shader just transforms an input vertex (and some other inputs) into an output vertex, and a fragment shader just transforms an input fragment (and some other inputs) into an output fragment. This is done at a scale so massively parallel that we have dedicated hardware for it, because your CPU doesn't have enough parallelism.
Sure, HLSL and GLSL are locally procedural. The most recent fancy versions of them even allow for some shared state mutation within a thread wave, useful for certain niche uses. But in their traditional contexts? I find even local mutation for most use cases to merely be a source of bugs and obfuscation via poor naming. And most of the built-in functions are pure.
Just wanted to add that shaders lend themselves very well to the functional / dataflow paradigm. There are numerous visual node-based shader editing tools where you construct shaders from blocks, which essentially are pure functions.
> Except that there is a real advantage to using pure functional programming; being able to easily prove theorems about your code and understand different components in isolation.
And how often does that occur in practice for most of the programs people write? Even the most hardcore Haskell programmer isn't going around proving theorems about their modules beyond what the type system can provide for free (and that is true of statically typed OO also).
> The real problem with OOP is not that it forces you to do things a particular way, it's that what's new about it (privileging the first argument of each procedure, inheritance, lots of hidden mutable state) is bad, and what's good about it (encapsulation, polymorphism) is not new.
Encapsulated state has its own benefits and drawbacks (having explicit unencapsulated state limits reuse, as it is reflected in type signatures), and anyway, it is not unique to OOP (any commonly used language sans Haskell supports encapsulated state). The only way to get encapsulated state in Haskell is World -> World, which would pollute all function signatures it buried through (there is some good work in parametric effect systems, but it's still early).
OOP also is not very new, being formed from the culmination of patterns used in the 70s, and is about the same vintage as FP.
The "proving theorems" stuff is definitely overstated. People always toss that term around, and it's true in principle, but in practice hardly anyone ever uses correctness proofs in real world code.
But to flip it around and make a less grandiose version of the same point: the majority of bugs are to do with state. (I don't actually know if that has been "proved" but it's definitely what I find in practice!) Pure FP gets rid of a whole swathe of potential bugs at one stroke. Of course it makes programming harder in some other respects, as often it can be hard to figure out a pure function that's as efficient as the obvious imperative algorithm, or even to figure out how to do a state-y thing in FP in the first place.
I agree with this mostly. You can get rid of state in OO programs, and that often involves a bit of functional programming. Structs/values in C# help a lot in keeping the performance up (well, if GC is a problem, you can also inline your closures to prevent boxing... lots of crazy things like that). Essential (rather than accidental) state is impossible to eliminate in any case.
> And how often does that occur in practice for most of the programs people write?
Very frequently. The simple theorems that equational reasoning supports are very convenient for refactoring and optimizing code. Understanding components in isolation is useful for testing, isolating bugs, and refactoring of subprograms independently from their contexts. Since I do testing and refactoring frequently, I make good use of what FP offers.
Sure, this isn't equivalent to full 'correctness' proofs. The lightweight equational reasoning FP offers for expressions and behavior doesn't prove you have 'correct' expressions or behavior. But it doesn't take theorems of correctness to be useful.
> that is true of statically typed OO also
Not really. Statically typed OO doesn't prove nearly as much about the behavior. This is mostly due to potential for aliasing of stateful sub-components. If you have capability secure OO, like E language or NewSpeak or Joe-E, that can help a lot, but the potential for hidden communication via aliased components still hinders a lot of useful equational reasoning.
> only way to get encapsulated state in Haskell is World -> World
That's simply untrue. Encapsulated state in Haskell can be modeled by a variety of types. One is `Machine i o = i → (o, Machine i o)`. Such machines can be composed, each component machine having its own internal, encapsulated state.
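Spelled out in Haskell (`counter` and `compose` are illustrative names, not from any particular library):

```haskell
-- A machine turns an input into an output plus its own successor.
-- Whatever state it carries is captured in the closure, invisible to callers.
newtype Machine i o = Machine { step :: i -> (o, Machine i o) }

-- A running-total machine: the Int is its private, encapsulated state.
counter :: Int -> Machine Int Int
counter n = Machine $ \i -> let n' = n + i in (n', counter n')

-- Machines compose; each component keeps its own internal state.
compose :: Machine a b -> Machine b c -> Machine a c
compose m1 m2 = Machine $ \a ->
  let (b, m1') = step m1 a
      (c, m2') = step m2 b
  in (c, compose m1' m2')
```

From the outside, `counter 0` is just a value of type `Machine Int Int`; nothing in its interface exposes the accumulator, which is exactly the encapsulation in question.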
> OOP also is not very new, being formed from the culmination of patterns used in the 70s, and is about the same vintage as FP.
I'd say pure FP is a lot younger. The idioms for purely functional expression of IO simply didn't exist in the 70s. E.g. even in the mid 80s, you had languages like Miranda with impure 'readFile :: FileName → Bytes'. The Clean programming language using uniqueness types for IO was a new thing in the mid 80s. The modern use of monads harkens back to Wadler in the mid 90s. I tend to think of `pure FP` as more a mid-90s paradigm. It really is a different paradigm - a different way of thinking about and constructing programs - than the ad-hoc impure FP of Lisp and OCaml.
> Very frequently. The simple theorems that equational reasoning supports are very convenient for refactoring and optimizing code.
I was generalizing across all programmers. Of course individual cases may vary, and it really depends on the code that you write!
> Not really. Statically typed OO doesn't prove nearly as much about the behavior. This is mostly due to potential for aliasing of stateful sub-components.
Haskell simply avoids aliasing with value semantics, and even then you can add it back and get the same problems.
> That's simply untrue. Encapsulated state in Haskell can be modeled by a variety of types. One is `Machine i o = i → (o, Machine i o)`. Such machines can be composed, each component machine having its own internal, encapsulated state.
I'm not disagreeing. My point is that you have to bury the state in something else, which is then exposed via a signature (well, in Haskell which supports this, of course).
> I'd say pure FP is a lot younger.
We could debate a lot on what pure FP is, what pure OO is, and who climbed the ladder faster. But the fact that paradigms developed around the same time is probably due to the programming field maturing in that time frame.
I think the relevant question is whether functional programmers, not all programmers, regularly leverage the lightweight equational reasoning, refactoring, and context-independent behavior that is available with purely functional programming. I believe most do.
> avoids aliasing with value semantics, and even then you can add it back and get the same problems (...) you have to bury the state in something else, which is then exposed via a signature
In a programming model without side-effects, it is necessarily the case that all 'effects' are modeled in the call-return behavior. The type signature, too, for a strongly typed language. And I agree that, upon modeling state or aliasing, we get to deal with not just the feature but all of its associated problems.
OTOH, the problems of state and aliasing don't implicitly leak into every subprogram. The precise control over effects can be very convenient.
You couch that control in pessimistic terms like "pollute all function signatures it buried through". But in practice I've never had difficulty anticipating where I'll need to model effectful behavior, nor with extending the set of effects as needed. Oleg's recent development of freer monads with more extensible effects [1] is compelling in its potential to remove remaining drudge-work.
> We could debate a lot on what pure FP is, what pure OO is (...) paradigms developed around the same time
Pure FP (programming with mathematical functions, no side-effects) and impure FP (i.e. first-class procedural programming) are essentially different paradigms. They require different patterns of thinking, reasoning about, and constructing programs. Despite the ongoing battle over the "functional programming" branding, it isn't wise to conflate the two paradigms. It was impure FP that developed around the same time as OOP. Pure FP is about twenty years younger, more or less.
(The mention of 'pure OO' seems an irrelevant distraction. Do you believe pure OO vs. impure OO, however you distinguish them, require significantly different design and development patterns and are thus distinct paradigms?)
> I think the relevant question is whether functional programmers, not all programmers, regularly leverage the lightweight equational reasoning, refactoring, and context-independent behavior that is available with purely functional programming. I believe most do.
This begs the question of who is a functional programmer, and how typical are they? I know a few FP programmers who are able to stay in the abstract world for a long time, thinking symbolically, equationally, and don't need petty things like concrete examples (most of the members of WGFP, for example). Then there is the rest of us!
> In a programming model without side-effects, it is necessarily the case that all 'effects' are modeled in the call-return behavior. The type signature, too, for a strongly typed language.
You can always default to World -> World in a pure language, and ya...technically you don't have side effects anymore, but for all practical purposes you do! For this to be useful at all, you have to keep your effects fine grained, and for functions that call other functions (like a general ForEach), effects have to be parametric as well, or you wind up polluting everything (or worse, being unable to express something).
Pure FP culminates from a bunch of experience in the 70s (I'm not talking about Lisp), which happens to be where OO came from as well. Pure OO doesn't really make sense...OO can't be pure but rather organic.
Methinks you misunderstand the programming experience of FP. A nice property of 'equational reasoning' is that, as a universal property of code, I don't have to think about it. It just becomes a law of code - conservation of meaning. Abstraction, splicing, refactoring, and reuse happens easily and fluidly because I don't need to think about whether those actions are safe or correct. It isn't about hand-wavy abstract worlds at all, but rather concrete manipulation of source code. Concrete examples are also very common and useful with pure FP, e.g. use of REPLs is common.
A pure `World→World` effects model is NOT a practical equivalent to introducing side-effects. Unlike with side-effects, it's trivial to constrain a subprogram from access to the whole World, e.g. use divide and conquer tactics to hide parts of the world, or limit a subprogram to a constrained subset of monadic commands that an interpreter uses to access and update the world in limited ways.
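A toy version of that "constrained subset of monadic commands" idea, where the subprogram never sees the World at all, only a tiny command language that the caller's interpreter maps onto the world in limited ways. (The `Cmd` language, the `World` record, and all field names here are hypothetical.)

```haskell
-- A tiny command language: a subprogram written against Cmd can read and
-- write one counter, and nothing else. (A hand-rolled free-monad-style
-- encoding; continuations carry the rest of the program.)
data Cmd a
  = Done a
  | GetCounter (Int -> Cmd a)
  | SetCounter Int (Cmd a)

-- The subprogram: add k to the counter, return the new value.
bump :: Int -> Cmd Int
bump k = GetCounter $ \n -> SetCounter (n + k) (Done (n + k))

-- The "world" has more in it than the subprogram is allowed to touch.
data World = World { counter :: Int, secret :: String }

-- The interpreter confines every command to the counter field;
-- `secret` is unreachable from any Cmd program by construction.
run :: Cmd a -> World -> (a, World)
run (Done a)         w = (a, w)
run (GetCounter f)   w = run (f (counter w)) w
run (SetCounter n k) w = run k w { counter = n }
```

The interesting property is in the types: `bump` has no access to `World`, so the divide-and-conquer hiding happens at the interpreter, not by discipline.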
No experience from the 70s left academics or industry prepared to support purely functional IO models. It should go without saying that you can't have a complete programming paradigm without an effective story for IO. Miranda, an attempt at a purely functional language of the mid 80s, made a valiant attempt at pure IO, and failed. Even early Haskell, up through the early 90s lacked an effective, coherent story for IO. I find your repeated assertions that pure FP was somehow a child of the 70s to be very dubious.
REPLs don't help out with refactoring, splicing, or even abstraction, at least with the current tool chain we have.
If you add a World->World effect to your function, it is the equivalent of saying it has unconstrained side effects, is it not? It is the practical equivalent, even if its implementation can pair off the world as needed.
Hindley-Milner, Landin, ISWIM, APL, list comprehensions, etc. are all children of the 70s. The IO problem wasn't solved until Wadler in the early 90s, but then again we didn't get type parameters in OO languages before then either.
Wat? It's equational reasoning that lets us manipulate concrete code easily. REPLs are just proof that functional programmers like working concrete examples, too.
A World->World function is unconstrained in its effects upon the given world, modulo substructural types. But it is not a "side effect". Among other important differences, your program can't describe an infinite loop with observable effects on that World.
I don't consider type safety, list comprehensions, etc. to be essential for pure FP. Immutable values, first class functions, and explicit effects models on the call-return path (instead of side effects) are the critical ingredients. Comparing the essential IO problem to "type parameters" (which aren't even a problem for a rich ecosystem of dynamically typed OO) seems disingenuous.
> And how often does that occur in practice for most of the programs people write? Even the most hardcore Haskell programmer isn't going around proving theorems about their modules beyond what the type system can provide for free (and that is true of statically typed OO also).
"What the type system can provide for free" isn't static. Part of working in that kind of language is structuring your code such that important properties end up being proved by the type system.
It rarely rises to the level of a "theorem", but "is this piece of code equivalent to this other piece of code?" is what I'd venture to suggest programmers spend most of their day asking, and Haskell-like languages make that easier to answer.
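A mundane instance of that everyday equivalence question is map fusion: because `map` and `(.)` are pure, the rewrite is meaning-preserving for every input, so you can collapse two traversals into one without re-testing the surrounding code. (`twoPasses`/`onePass` are made-up names.)

```haskell
-- "Is this piece of code equivalent to this other piece of code?"
-- With pure functions, the law  map f . map g == map (f . g)  answers
-- yes, unconditionally, so the refactor below is safe by construction.
twoPasses :: [Int] -> [Int]
twoPasses = map (* 2) . map (+ 1)

onePass :: [Int] -> [Int]
onePass = map ((* 2) . (+ 1))
```

In an impure language the same rewrite is only valid after you've convinced yourself neither function touches shared state; here the language rules out that possibility.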
> "is this piece of code equivalent to this other piece of code?" is what I'd venture to suggest programmers spend most of their day asking, and Haskell-like languages make that easier to answer.
It is a boring question to ask, and one that doesn't come up much in my work. Does this mean Haskell-like languages are not appropriate for my domain?
It means your domain is alien to mine, and I can't tell you what is or isn't appropriate.
For me programming is about automating some business process or at least something you already know how to do, so it's a case of: 1) transcribe the doing into plain code 2) factor out patterns/redundancy in the code. (In practice this is interleaved, and 2) results in building up a language for expressing the domain in code which in turn simplifies 1) - indeed, when adding new functionality it's often a case of first transforming the current code into a representation that makes the new functionality simple). 2) is where the vast majority of the work lies and what takes up the time. Just as good writing is good editing, good coding is good refactoring. At least in my domain.
I do research working on new experiences, like live programming environments. Nothing is really known, there is no spec, everything is a design prototype, code is disposable in a fail-fast kind of way. So I'm pretty extreme, I can see where other domains are very different.
As a weaker form of going around and proving stuff: people do care a lot about properties. We just check them via QuickCheck, instead of formally proving them.
And we organize our programs such as to get nice properties.
It may be weak, but by hell it's practical. Tons of tooling, tons of engineers who understand it, and with a pragmatic approach, eliminates many bugs and regressions.
Any sort of thinking about what the code does or does not do is essentially proving theorems about it, is it not? It seems to me one does that all the time when debugging.
I don't think about coding that way. To me debugging is more like a Sherlock Holmes investigation rather than a formal theorem proving process. I guess we maybe work on different kinds of programs.
You're still, at some point, simulating the steps of some abstract machine in your head to understand what the debugger is telling you.
The simplest case is replacing an expression with its value, given an environment of lexical bindings that are apparent from the source program. Not much investigation necessary. That's FP.
For OO code, you just need to keep track of a lot more context: the state of the receiver of the currently executing method, its hierarchy of parent classes, the runtime class of each object (because late binding is pervasive). Of course you can write code that doesn't use any OO features, but the languages clearly aren't designed for it. See: any number of "functional C++" articles.
And, of course, you can get the same kind of highly dynamic behavior in FP languages by explicitly using open recursion, higher-order state and hiding everything behind existentials. But very few codebases do that because the vast, vast majority of the time just one of these features is enough to solve a problem.
OO context is internalized linguistically, via lots of metaphors. Those metaphors can lie, of course, but your brain can apply abstractions to the state of the machine to deal with its complexity, for better or worse. Ideal FP debugging, which I don't think exists in practice, relies on ultimate truth with equational reasoning realized through techniques like referential transparency. In the ideal case, you just reason about the equation and there are no hidden surprises, fuzzy metaphorical reasoning is minimized. In practice, there is still plenty of metaphorical reasoning going on for any non-trivial program as even explicit state necessarily becomes implicit as it increases in quantity (our brains can't handle seeing everything even if it is technically all in front of us).
OO thinking optimizes for the less ideal cases that are far more common. Ideal FP has this ideal mathematical view of the world that rarely pans out in practice, while less ideal FP just resorts to OO-style metaphors and abstraction in practice. For the kinds of experiences I work on (heavily reactive, lots of state, complex interactions), this works well for me, and we are developing techniques to make it less painful (live programming). "Worse is better", as RPG would say.
It sounds like we agree that lexically scoped immutable values are easier to understand, either precisely or with fuzzy metaphors. I'm not sure what "ideal" FP is or how it might have a certain "mathematical view of the world". The features I mentioned are just a subset of semantics that every high-level language programmer already knows, but are pointlessly hobbled in popular languages.
Why should expressing something basic like a tagged union require a detour through the quirks of a particular object system? Ditto for polymorphism, modularity, etc, etc. Clearly we disagree on how useful objects really are in practice, fine. Why not add these domain-specific features on top of a core language with simple semantics? It worked fine for lisps. Luckily, after a couple of decades of "everything is an object!" nonsense, that seems to be where newer languages (Rust, Swift) are headed.
You mean CLOS? This is exactly the context RPG coined "worse is better".
Languages are not so much a collection of features but mindsets. So polymorphism, dynamic dispatch, subtyping, etc...do not define OOP so much as they are leveraged by those languages to enable reasoning with names and metaphors. Calling them just domain specific features misses the point like talking about some dish only as the sum of its ingredients.
Tagged unions and GADTs are quite different in expressiveness and modularity, I still remember the mega case matches used in scalac. Should one function really be given so much functionality when a layered design with several virtual method implementations would be much more amenable to change and modular reasoning? Well, I guess it's a matter of how you view code.
> Languages are not so much a collection of features ...
If you want to define objects precisely, even just to have a language spec, they are absolutely made up of sums, products, recursive types, etc. Whatever useful metaphors one might have to work with objects doesn't change what they actually are. If you give the programmer access to these building blocks, you get ML.
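As a sketch of that desugaring: an "object" is just a product of closures over captured state. Here's a hypothetical `makeCounter` in TypeScript; all names are illustrative:

```typescript
// An "object" desugared into its building blocks: a record
// (product type) of closures sharing lexically captured state.
type Counter = {
  increment: () => void;
  value: () => number;
};

function makeCounter(start: number): Counter {
  let count = start; // "private" field: just a captured variable
  return {
    increment: () => { count += 1; },
    value: () => count,
  };
}
```

Encapsulation falls out of lexical scoping alone; no object system is required.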
> Calling them just domain specific ...
I meant that the particular way they're combined to get OOP is domain-specific.
Again, I'm just asking: why mess with the basics? Why require encoding simple, universal concepts in terms of an ad-hoc object system? You can still have an object system on top, if you find that helps with the really complicated cases, but why encode the parts of your code that are simple in terms of much more complicated, derived concepts?
> Except that there is a real advantage to using pure functional programming
That's only true for certain values of "real". In real "real" terms, the advantage is purely virtual - even very simple programs require hundreds of pages of proof. In effect, formal proofs are almost never done in practice, and the "advantage" remains purely virtual.
> what's new about it (privileging the first argument of each procedure
Multimethods, CLOS, Dylan?
> inheritance
Don't use it if you don't like it that much...
> lots of hidden mutable state
It's not hidden, it's encapsulated. Do you know Smalltalk? If not, you should try learning it; I liked Pharo very much. Look at Morphic, work with it for a while and you'll see the difference. You can also try devising a purely functional equivalent of Morphic while you're at it (good luck!).
For some values of "pure", sure. The problem is, keeping the level of purity that allows you to strictly prove theorems about your code is very hard. And most real code would compromise, while still pretending that it is pure and its results are proven. That's not a road to a good place.
I seriously doubt there's much industrial code written in functional languages that has been proven correct. And I'm not talking about the halting problem, etc. - nothing that deep - I mean just writing a spec of what the code should do in every situation and proving that it indeed does that. I haven't seen any. Maybe in theory it'd be possible; in practice, it doesn't happen. In fact, if it happened frequently, unit testing in functional environments would not exist - you don't need to test what you can prove correct. In reality, it very much exists.
> There is a reason why the majority of proof assistants are implemented as functional languages.
It's not the direction you need to prove, though. Surely, it is easier to express things we can prove in functional terms. The challenge, however, is to see if it's easy to prove that code follows certain specs when that spec is not specially fit for being expressed in functional terms.
> it's that what's new about it (privileging the first argument of each procedure, inheritance, lots of hidden mutable state) is bad
Hidden mutable state is not mandated by OOP in any way. And arguing that inheritance and special arguments are bad would require more than just saying "it's bad".
> and what's good about it (encapsulation, polymorphism) is not new
So what? Where's the problem if it's not new? The letters we're using aren't new, and neither are the numbers, yet they still serve us well. Not everything good must be new; if it's old and still good - even better. I fail to see how "it's not new" is any meaningful criticism - it's like criticizing a math library because the result it returns for 2+2 is not new. Why would we want it to be new?
"Pure" functional programming isn't the only alternative. There's plenty of benefit from just using a language with reasonable semantics (e.g. ML) that doesn't force you to contort simple programs into bizarre "OO" patterns.
Whether or not you want your language to enforce segregating (most) effects is a separate issue, about which reasonable people can disagree.
>>Except that there is a real advantage to using pure functional programming; being able to easily prove theorems about your code and understand different components in isolation.
How much of the Haskell (a pure FP language) library code is formally proved? Just curious.
Also I wonder, if code like xmonad can be proved at all.
The problem I have with the functional paradigm is that it's couched in the idea that all problems can be reduced to the same kind of algorithms. It's better to accept that all problems can be reduced to some kind of algorithm, whether or not it follows neatly along with the functional paradigm. That's why OOD sometimes works better for some problems than others, and vice versa. Plus, I think not all functional programming languages are equal in this regard. I have a bias toward OCaml since it's fairly easy to get into without having to abandon previous knowledge. You can learn OCaml even if most of your professional life you've written code in C. Whereas with Haskell, you have to unlearn lots of old idioms which worked well in C to get a basic comprehension of the language. So, just picking up Haskell because you think a specific problem is easier within the functional paradigm is a bad idea IMO. Unless you have plenty of time to learn - then go for broke.
I don't believe FP has any strong assumption that all problems reduce to the same kind of algorithm. Rather, different kinds of algorithm are modeled typefully - e.g. with various monads or abstract data types.
As you mention, pure FP does require learning different idioms. OTOH, a lot of those idioms can be applied effectively to imperative programming, so learning isn't necessarily a wasted effort even if you spend most of your time wrist deep in C code.
I see your point. I just think there are better options if you're going to approach FP. Picking a language that's flexible about its enforcement is one approach. I think I've learned more from Common LISP and OCaml about FP than I ever did from Haskell. It's not to say Haskell is a bad language, it's just not one you can pick up in a day.
> pure functional programming; being able to easily prove theorems about your code
I believe you're referring to a type system. That is different. There is little about functional programming that allows you to easily prove theorems about your code.
No, FP is great for analyzing your code! Internal state and side effects are the main things that make code hard to analyze, and it's exactly those things that FP avoids.
You can view a programming language like a fighting weapon (although I don't promote violence). If the enemy is right in front of you, you may kill him more easily with a sword than with a nunchaku. But if he is around the corner, you may have better chances with the nunchaku. Or if he's at a distance you may use shuriken. One weapon can't offer all benefits, because it becomes impractical or dangerous.
The reason why an OOP language doesn't let you have functions outside a class, may be the same with why you don't have blades on a nunchaku. You may hurt yourself.
One needs to understand the strengths and weaknesses of each weapon, and when it's better to use each one.
> Don't be a "functional programmer", just take the ideas that are useful.
But it's not about taking. It's about giving. What interfaces do you provide to someone else? Are the datatypes that you define mutable? Do the functions that you expose behave statefully? Do they modify the outside world?
All of these are things that functional programming (I guess I'm thinking of Haskell-alikes specifically) nudges you away from. Functional languages take power away from you to give power to someone else: the consumers of what you write (who may be yourself in the future).
1. Inheritance creates dependencies on their parent class
2. Multiple inheritance is hard
3. Inheritance makes you vulnerable to changes in self-use
4. Hierarchies are awkward for expressing certain relationships
All true. But likewise, functions introduce dependencies on their arguments, and data structures introduce dependencies on their fields. You must consider your dependencies carefully when designing any software interface.
The task of software architecture is not to go around categorizing everything into taxonomies. Inheritance is just one tool in your software interface toolbox.
5. Reference semantics may result in unexpected sharing
This has more to do with reference semantics than objects.
6. Interfaces achieve polymorphism without inheritance.
Interfaces long for inheritance-like features. For example, see Java 8's introduction of default methods, or the boilerplate involved in implementing certain Haskell typeclasses.
I think that inheritance and composition are not the interesting aspects of object oriented design, or of any language or paradigm. These are tools for code brevity / reuse / elegance. Advanced copying or ctrl-c.
I would also argue that too much elegance in any language or design paradigm faces the same problem: elegance is often (but not completely) in tension with modularity or granularity of control. That matters because a common engineering situation is system change (feature growth or reorganization of code), and improvements in granularity or modularity make system restructuring easier.
I think that object oriented design is more essentially about modelling distributed state, because you probably have multiple objects with their own internal state. I believe this means that object oriented design is highly concerned with protocolized communication and synchronization between distributed states, whether via messages or channels or something else.
I believe that in distributed situations, object oriented design can be very harmonious with functional reactive programming strategy. You can easily and usefully have a situation where an object functionally updates its internal state with a typed stream of inputs.
That is a good analysis. While I was reading this article all I could think is "You wanted to do things in a bad way and then you learned how to do it the right way and you don't like the right way?"
His entire problem seems to be he thought OO was a magic bullet he could do whatever he wanted with and then he learned there was more to using OO than the three concepts he cites at the beginning.
And this guy has supposedly been writing in OO languages for decades? What?
> And this guy has supposedly been writing in OO languages for decades? What?
This is the bit i don't get. It's like he learned OOP in the '90s, when everyone thought inheritance was rad and nobody had realised how terrible mutating shared state was, and then fell asleep for twenty years. None of this article, none of it, has any relevance to how OOP is practiced by informed people today.
The keyword there is informed. There are still plenty of shops that practice OOP this way. I'm working at one right now and there are OOP horrors around every corner.
Unfortunately, they seem intent on adding more mutable state rather than eliminating it.
Quite a few, but the worst would be the custom logger. I got lost trying to trace the inheritance graph, which spans projects, and the dependency graph: loggers within loggers within loggers. I gave up when I ran out of space trying to sketch the relationships in my notepad.
Being encapsulated means you don't actually have any control over the logger, or the threads it spins up. Creating a logger for a new app requires inheriting from an application logger (which is already 3 or 4 layers deep in the inheritance hierarchy).
The distinction other loggers make between the log interface and the appenders is non-existent. If you want to log to a new source (say, the event log) you have to add a new layer to the inheritance tree.
Then there's the fact that it's handling the application state, in an "on error resume next" kind of way. And this is global state, so don't even think about multithreading.
Naturally, actually accessing the logger is done via a singleton.
It causes more problems than it helps resolve, and the ones it causes naturally don't have any diagnostic information available.
There are plenty of others, but it's hard to top this tour de force of OO anti-patterns. If only there were some free and stable alternative...
I read it slightly differently. The author has an internal notion of what "good" programming is, that appears to contain the concepts "compact", "correct", and "uniform".
For compact, a "good" program converts the expression of intent (the source code) into an executable that, statically (being loaded, being resident), uses the minimum of resources, and, dynamically (while running), minimizes its resource usage (and thus is "fast").
For correct, a "good" program works the same way every time you compile it and run it. Good in this case would be systems which notify you of changes which can affect the existing compiled code.
And for "uniform" the same syntax or tools are applied in the same way for all idioms. "Bad" here is the number of special cases that have to be accounted for (or inversely good is the lack of special cases).
As he wrote, the "promise" of the object oriented approach to programming has not been fulfilled according to a set of criteria that the author came up with internally. That sums to an opinion of dislike based on an internal metric.
That said, since I share what seem to be his unidentified metrics for "goodness" I found myself agreeing with his statements. I recognize though that there are many metrics which others may choose to rank more highly than those so rather than "failed promise" I'd characterize it as "It hasn't worked out to provide a better solution for me."
There is no good evidence that OO is easier to code and maintain than some alternative. In my experience (which is, of course, only anecdotal), the authors' skills are by far the strongest determinants of maintainability. In the wrong hands, OO features are just additional dimensions for obfuscation.
You all have missed the point that the author is well aware of what needs to be done to avoid the pitfalls of OO and be productive in an OO language. His point is that OO is still being taught, promoted and justified with the same simplistic claims and assumptions that were made in the 90s.
It needn't be. The awkwardness of multiple inheritance in C++ comes from the fact that C++ classes try to be both classes and abstract data types, and end up not fulfilling either role in a satisfactory manner. In OCaml, where abstract data types and classes are completely separate and unrelated mechanisms, multiple inheritance is the most natural thing in the world.
> 4. Hierarchies are awkward for expressing certain relationships
Hierarchies are totally fine. The problem is tying vtables to individual objects. This deprives you of the ability to say “this virtual method operates on two objects of unknown types”, because the only type that can be abstracted is that of the object carrying the vtable. Haskell-style type classes and CLOS-style generic methods don't have this problem.
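To illustrate, here's a rough TypeScript sketch of dispatching on the tags of both arguments, the way a CLOS generic function would (the `Matter`/`combine` names are invented for the example):

```typescript
// Dispatching on the tags of BOTH arguments, instead of through
// a vtable owned by the first one (a poor man's multimethod).
type Matter = { kind: "solid" } | { kind: "liquid" };

function combine(a: Matter, b: Matter): string {
  if (a.kind === "solid" && b.kind === "solid") return "pile";
  if (a.kind === "liquid" && b.kind === "liquid") return "mixture";
  return "suspension"; // one solid, one liquid, in either order
}
```

With single dispatch you'd have to pick one argument as the receiver and fake the rest with downcasts or visitor-pattern boilerplate.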
> 5. Reference semantics may result in unexpected sharing
Unexpected sharing is only a problem with mutable data.
> This has more to do with reference semantics than objects.
How many languages that call themselves “object-oriented” don't equip all objects with a first-class identity?
> 5. Reference semantics may result in unexpected sharing
> This has more to do with reference semantics than objects.
I would argue that these are intertwined. You can't really have objects without aliased identity and therefore reference semantics. Otherwise, you are in "land of the anonymous values" where FP techniques are much more useful, while objects clearly have names. (I'm an OOP fan BTW, but also heavily use FP.)
I think the functional vs OO debate is being done with a very narrow point of view.
Functional came before OO, and there are reasons why OO became much more popular - it had a much better, easier and simpler solution to the most common problems of the '90s and early 2000s, namely handling GUIs and keeping single-process app state (usually for a desktop app).
It fares much worse in today's world of SaaS and massive parallel computing.
Frankly, I think the discussion would be much better if we debated the merits of each paradigm in the problem domain you are facing, rather than blindly bashing a paradigm that is less suited to your problem domain.
For instance, I have yet to see an easy and simple to use (and as such maintainable) functional widget and GUI library.
But that's not a library. It forces your whole project into a box in which it may not fit. Elm is not without problems as well, although they (him?) are working on the biggest pain-points.
Isn't that the point? Since Elm is a pure, strongly typed language, it introduces a lot of constraints. You can't mutate anything, there are no side effects (only declared effects captured by the type system), and the only architecture you can use is "The Elm Architecture". However, those restrictions will keep your code sane and the compiler almost guarantees that your code will work. It's a pretty fair trade-off if you ask me.
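For readers unfamiliar with it, The Elm Architecture boils down to a model, a message type, and a pure update function. A miniature TypeScript approximation (not real Elm, and omitting views and effects):

```typescript
// A model, a message type, and a pure update function - the
// whole state-management story, with no mutation anywhere.
type Model = { count: number };
type Msg = { type: "Increment" } | { type: "Decrement" };

function update(msg: Msg, model: Model): Model {
  switch (msg.type) {
    case "Increment":
      return { count: model.count + 1 };
    case "Decrement":
      return { count: model.count - 1 };
  }
}

// State at any point is just a fold over the message history.
const msgs: Msg[] = [
  { type: "Increment" },
  { type: "Increment" },
  { type: "Decrement" },
];
const finalModel = msgs.reduce((m: Model, msg) => update(msg, m), { count: 0 });
```

Because `update` is pure, the runtime can replay, log, or time-travel through messages for free.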
I think the reason is much more fundamental - even now functional languages are slower - even with all the fancy modern compilers, optimizers, GCs that were not even in the same league 20 years ago or more.
OO maps nicely to the shared memory model and has fairly low overhead compared to things like persistent collection data structures. This helps when you are forced to deal with lower-level stuff.
Nowadays we are IO bound; we moved from single shared-memory machines to distributed services communicating with messages, and the problems are more about data transformation than HW management. OO is cumbersome in this context.
Functional GUI is easy to do - look at the Clojure React wrappers.
OCaml is quite fast, and I guess it could produce faster code than C++ on some occasions, but it's definitely SLOWER than C++ for most benchmarks, e.g.:
That's not a very strong point - those benchmark sources usually look like "C in language X" and not like "idiomatic use of language X". I'd bet they don't use immutable data structures, and hand-optimize code, avoiding common patterns in functional programming, for perf.
Which is my point - there are languages that can compile to similar code as C/C++ if you write similar code, but the idiomatic code is still slower because it doesn't map to hardware cleanly and has inherent overhead. This is why functional languages didn't catch on when we were heavily CPU bound; it was common for people to write ASM because they couldn't get good enough perf out of C compilers.
Now that we've transitioned from vertical to horizontal scaling, the problem domain maps very nicely to functional languages (actor-model/message-passing style communication is practically transparent with immutable data function calls).
I did look at the source and it seemed that way (imperative code using flat arrays) but I don't know enough OCaml to say with certainty.
I have worked with other functional languages (Clojure) and I have optimized GCed code, and it mostly boils down to what I said - writing C/C++-like code in language X to get as close as possible to the hardware memory model (avoiding GC by pooling, increasing cache coherence, using value types or manually unfolding data structs, etc.)
I haven't seen a magical way for high-level data structures based on trees (used for persistent structures) to outperform arrays.
No, that won't happen, but a lot of the code that you write in a functional language can be optimized far better than another language, because the compiler has more, stronger guarantees about your code.
There's some truth to this, but "can be optimized" is not the same as "there exists a compiler that optimizes".
Your statement gets repeated a lot (and similarly about the JVM: it has run-time information that compilers don't have, so can optimize better), and yet C++ and Fortran continue to come out on top.
Indeed, because C++ and Fortran get the money and time, and they're closer to the metal to begin with. Also, C++'s "Nasal Demons on UB" problem helps a lot.
But yes, OCaml is quite fast, and probably could be faster. Does it really beat C++ in all contexts? Probably not, but it's still pretty fast.
This is a good point. Functional is a really useful model for mostly stateless data processing. When you have lots of state that needs to be mutable, not so much.
Even state is easier when modeled purely though imo. Especially once you have to backtrack and rollback state. If you use pure state, it's trivial. If you don't, gl!
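As a sketch of why rollback is trivial with pure state: keep each old snapshot instead of overwriting it. A minimal hypothetical `History` helper in TypeScript (all names invented for illustration):

```typescript
// Undo with immutable state: every update keeps the previous
// snapshot, so rollback is just popping it back off.
type History<S> = { present: S; past: S[] };

function applyStep<S>(h: History<S>, f: (s: S) => S): History<S> {
  return { present: f(h.present), past: [h.present, ...h.past] };
}

function undo<S>(h: History<S>): History<S> {
  if (h.past.length === 0) return h; // nothing to roll back
  const [prev, ...rest] = h.past;
  return { present: prev, past: rest };
}
```

With mutable state you'd instead have to write (and test) an inverse for every operation.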
Conceptually, components have state, but it's all managed by react, so you only ever write pure functions. Granted, depending on how you update state in event handlers and lifecycle methods, you will need to think about state a lot, but this is an unavoidable fact of UI programming.
> every component must be created by extending a base class
This is an implementation detail, instead of ES6 classes you can also use React.createClass(), which might as well be named React.createComponent(). These "classes" can't be inherited from.
Conceptually it's nice. All React code I've seen or written, of even the most trivial systems, has components (and more than a few) that override lifecycle methods - which is almost by definition OO's polymorphism.
But even if you could write every component in the stateless style, it doesn't change the fact that your framework/GUI library is using OO extensively in its fundamental concepts. I think that makes it OO; you seem to disagree.
Also, now in React (as of 0.14, with stateless function components) you can define components as pure functions that return a JSX data structure to the render function.
i.e. (coded from memory, so the syntax may be wonky):
const TestComponent = () => {
  return <h1>Blah!</h1>;
};
My impression of react is that it's very object oriented. Defining reusable, stateful components is classic OO design. Now, I don't have much experience with React so maybe someone can explain why it should be considered "functional".
React is an interesting mix of functional and OO. It's OO, in that the primary approach for defining components is class-based, and components have state and lifecycles. It's functional, in that the render methods are expected to be pure functions based on component state and props, and simply output a description of what the UI should look like as a result. Also, as a whole, React definitely pushes you to view the system in terms of composition, state transformations, and pure functions, rather than imperative "toggle this, add that, update the other thing".
Not sure why people have that impression. It doesn't fundamentally do any OO things. You don't inherit from the React components. Maybe you could make an argument for encapsulation, but even that is shaky in JavaScript. Mostly it only uses prototypes as objects for namespacing and to organize the API better, since JS lacks good namespacing syntax.
It depends on how you use React.
You can use it in a stateful, OO way. Or you can use it in a more functional way.
A good example of how to use React in a functional manner is the Redux framework (basically you don't use the stateful parts of React and treat your components as a tree of pure render functions). http://redux.js.org/
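The core of that style is just a pure reducer over a single state tree. A hand-rolled sketch in TypeScript (the action and state types are invented; the real Redux adds a store, subscriptions, middleware, etc.):

```typescript
// The whole app state lives in one tree, and the only way to
// change it is a pure reducer: (state, action) -> new state.
type State = { todos: string[] };
type Action = { type: "ADD_TODO"; text: string } | { type: "CLEAR" };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "ADD_TODO":
      return { todos: [...state.todos, action.text] };
    case "CLEAR":
      return { todos: [] };
  }
}
```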
#SpotOn. Tools should match problems. But try telling that to an ego-driven programmer. For most, they have a hammer and everything is a nail. Is it any wonder the number of IT projects that go sideways or full-on tits-up really hasn't changed?
I don't have any idea what you're talking about when you say 'functional came before OO'.
I'm guessing you are talking about Lisp, but not only are Common Lisp (and the languages it evolved from) not at all functional in any technical sense of the word, many of them only had dynamic scoping, and often used mutation of data structures implicitly.
Stop talking nonsense.
EDIT: If you are talking about Scheme, then the same exact points apply. On top of all of the above points, nobody in the world other than a few extreme language-philes ever uses any of the functional languages, and for good reason. Functional programming is nonsense; the only people that talk about it are academics and people who want to seem smart. Functional != Structured Programming, Functional != Procedural, 'functional' has a precise meaning: treats computation as the evaluation of mathematical functions and avoids changing state and mutable data. Neither Scheme nor Lisp nor any but a very few quite obscure languages satisfy this.
>I don't have any idea what you're talking about when you say 'functional came before OO'.
Lambda calculus is technically the first functional language, and predates computers. But if we're talking about actual programming languages, SASL, one of the earliest functional languages actually implemented, was released the same year as Smalltalk, and was influenced by the older ISWIM, which was never implemented.
>I'm guessing you are talking about Lisp, but not only are Common Lisp (and the languages it evolved from) not at all functional in any technical sense of the word.
The functional style (writing most of your code as pure functions, keeping impure code isolated), however, can be practiced in ANY language. It makes your systems easier to reason about, because you know if state is being modified. And Lisps had many constructs that made FP convenient. Like map. So while Lisp, on the whole, wasn't functional, it was one of the easiest languages to write functional code in for some time.
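That style is available in any language with functions: keep the logic pure and push I/O to the edges. A trivial TypeScript illustration (the `applyDiscount` example is made up):

```typescript
// Functional style in an ordinary language: the logic is a pure
// function (no mutation, no I/O), so the same inputs always
// produce the same outputs - printing happens at the edges.
function applyDiscount(prices: number[], rate: number): number[] {
  return prices.map(p => p * (1 - rate));
}
```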
>On top of all of the above points, nobody in the world other than a few extreme language-philes ever use any of the functional languages, and for good reason.
On the contrary, FP has heavy use in finance and other places where there is a low tolerance for error, as well as Microsoft and many other companies. It has heavy use in programming language research, and a functional programming language was used for program analysis to aid in the optimization of FFTW, which is still one of the fastest implementations of the Fast Fourier Transform.
>Stop talking nonsense.
As the above demonstrates, you're the only one who's doing that.
The lambda calculus was also the first OO language. (William Cook pointed this out in http://web.engr.oregonstate.edu/~walkiner/teaching/cs583-sp1... and I'd noticed the same thing. I just want to bring in a neat observation here, not fight for either side of the OO/FP culture war. I wish we humans wouldn't do that.)
Not really? The claim that functional programming predates OO would require that functional programming languages existed before OO languages, which is clearly false. If anything, they were both developed at approximately the same time, which has more to do with the arrival and wider use of hardware that could support higher-level languages.
ISWIM did exist, even if the compiler wasn't there. The lisp community did a lot of work with FP-style code, even if the language as a whole wasn't functional: the idea was still there.
But your main point was that FP was useless, deriding it, and claiming that, "only people that talk about it are academics and people who want to seem smart," which is demonstrably not true.
Where are the software projects that are built using functional languages?
I'll call out XMonad. What can you come up with? Obscure companies doing obscure things, or else people using Common Lisp in a nonfunctional way.
What major project or product is built using Haskell or OCaml? There aren't any! The only things I have ever used that were written in those languages are XMonad and csvtool, respectively.
map, filter, etc are not 'functional'. 'functional' doesn't mean 'functions'. It is all about side effects, and that the procedures in the language are like mathematical functions: without state or side effects.
Using your definition, virtually every language in use today is 'functional', all the way down to Fortran 77, since really any language in which you can get a pointer or other handle to a function or callable object would let you implement map, filter, etc.
Really the existence of map, filter, etc are not a clue to a language being 'functional'. Yes, you would need such things, but they are not sufficient. C is not a 'functional language', even though technically you can write C that is completely functional in style.
In fact, almost any modern language can be used in a 100% functional style, avoiding side effects and state completely. Nobody does this.
Even if it is one of your favorite things, virtually no one uses functional languages. This isn't because people are dumb, it's because functional languages are cumbersome and inefficient.
I would not think passing procedure pointers in C makes it able to support functional programming. That's very shallow.
C lacks (last I looked) nested functions, closures, functions as first class data types, ...
> Really the existence of map, filter, etc are not a clue to a language being 'functional'.
'Functional programming' isn't black and white. There are pure languages which ENFORCE functional programming, like Haskell. It's their main paradigm; the language and its libraries make heavy use of FP, and one has to actively manage side effects.
Then there are languages which SUPPORT some forms of functional programming, for example by providing a subset of the language and its libraries which can be used side-effect free, which makes use of higher-order functions, etc.
At the bottom are languages which ENABLE some basic functional programming idioms. Though most of the language and its libraries are not using FP, the user can still use some FP constructs like nested functions, closures, etc.
Generally I agree that pure FP is not much used in the real world, because of a lot of actual problems (difficult to learn/use/maintain, efficiency achievable only by experts, needs understanding of slightly advanced mathematical concepts, ...). There are some more or less used implementations of more or less pure FP languages (GHC, OCaml, F#, ..., Clojure). I'd guess that the main applications are in domains where they have highly skilled people: finance, data analytics, verification/specification, ...
I sometimes see that companies advertise use of FP languages, but in reality that's marketing in recruiting to look cool and to attract clever people (who then have to work on Java jobs, mostly).
I wasn't making the point that map and filter were functional, I was trying to say that those were concepts originating in FP. And that they appear in so many languages demonstrates some of FP's influence.
It took people quite a while to figure out how to compile functional code (especially lazy functional code) to commodity hardware. The research only caught up in the 80s.
Pre Scriptum: for GUIs, Tcl/Tk laid down the root OOP model, and no one has innovated since. Just copying.
But it is not about whether one paradigm/language is better than another.
It is about getting shit done. The paradigm/language wars and agile are cargo cult science: they transform a practice into religious thinking when it is nothing of the sort.
Everything in the list he makes about OOP is right; on the other hand, you can admit that objects as a namespace can be the right thing.
Like ... well, StringIO, the string, and the file or IO objects. But the data encapsulated this way have finite, well-known states.
I had a lot of pleasure using OOP in Perl, Python, and PHP, because it was easy to remember where things were, and error/resource handling can be done in a nifty way - even though making stuff optimized for speed or memory required diving into the guts of the implementations, or into counter-intuitive so-called intuitive ways of doing things.
I guess the rant should be about making leaky, coupled, stateful objects, which is what OOP has been advocating for years now. With every finite-state dimension you couple, your cardinality is the cross product of all the dimensions, and it grows more than linearly, very soon defying the brain's cognitive power. But still, people insist on going for more complexity because their «tool» can handle it. The limit is NOT the language. It is the human brain. But the hubris is there.
Is a Bill built from an Invoice, itself built from a Quotation, or are these simple transitions applied to a document through the use of functions?
The OOP approach may seem simpler for «encapsulation», but as soon as you add polymorphism for the payment gateway and the serialization of data, and have to keep it all straight with bookkeeping while no developer actually understands accounting, it becomes hell to synchronize what should be plain, trustworthy documents with objects. Because coders get lost in complexity, lose focus on the real-world application, and what they think may be wrong. Very few coders I know check that their model fits reality: they make assumptions.
A Quote/Bill/Invoice is basically no better than a plain ASCII document you can read, self-descriptive, with stamps giving its stage.
The problem with OOP is that documents are «abstracted» as a view over multiple states that are poorly modeled, because people overthink the implementation and lose the goal: to put $ in the bank and make your accounting/billing/commercial services work. Very soon, the document is very dependent on a database, its views, its controllers ...
And while most bosses are kind of embezzling, they don't say it that way; they say they want something flexible that can be easily changed, to make an innovation. All our IT is about subtly walking around rules or regulations.
Coders are over-specialized in coding, when they should first learn the craft they are modeling.
Other times, bosses over-generalize, expecting rules (like accounting) to stay the same across countries. And coders, being believers, think they have bugs: not because their code is complex, but because sometimes the real world admits no possible automation of a task.
The problem sometimes lies not in the paradigm or the language but in people's excess of confidence.
I'm pretty familiar with Tcl/Tk but not entirely clear what you mean about Tcl/Tk and OOP. It seemed like you said the Tk container hierarchy was the model that OOP platforms copied. I'd agree that Tk's widget model is very useful as a way to build GUIs, but it doesn't have the abstraction features found in Java, etc.
Tk could be seen as a predecessor of the DOM and CSS (as a widget-building paradigm), and possibly of other widget libraries like GTK as well. In Tcl, OOP has evolved along different, if overlapping, lines, as can be seen in the core oo::* namespace.
Advantages and disadvantages of various OO systems for Tcl have been discussed for a couple of decades. Issues around all of the aspects of OOP reviewed in the article were evident and the focus of disagreements, which showed the limitations of any particular approach and, as usual, that no free lunch or perfect answer can ever be found.
I agree with you. Moreover, I think anyone who is rabidly attaching themselves to a programming paradigm of any kind is doing themselves (and anyone who has to work with them) a disservice. It's all just tools. Committing yourself fully to a single tool like functional or OO or whatever is just as silly as a woodworker attaching themselves to only a single form of joinery.
Of course limiting oneself to one thing gives less options than two things, but this "tool" analogy (that keeps coming up) is not fair. Better would be to compare to the woodworking shop itself, where there are many tools that together solve a wide class of problems. You can run multiple shops if you like, with varying combinations of tools, and large overlap in the problems they can solve, but the overhead involved is not the same as choosing a screwdriver to screw and a hammer to smash.
The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said "Master, I have heard that objects are a very good thing - is this true?" Qc Na looked pityingly at his student and replied, "Foolish pupil - objects are merely a poor man's closures."
Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures. He carefully read the entire "Lambda: The Ultimate..." series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.
On his next walk with Qc Na, Anton attempted to impress his master by saying "Master, I have diligently studied the matter, and now understand that objects are truly a poor man's closures." Qc Na responded by hitting Anton with his stick, saying "When will you learn? Closures are a poor man's object." At that moment, Anton became enlightened.
OO is just a way of organizing code. You can simulate quite a bit of it in non-OO languages. But a lot of the problems are universal.
OO lets you abstract away a lot of detail, but locks you into some rigidity that doesn't map perfectly onto the real world. It's a leaky abstraction. But so is _everything_ real that we attempt to represent in a computer or in any formal system. Gödel proved this 85 years ago.
Code reuse is entirely possible with OO. The practical difficulties of code dependency management are not unique to OO. Anyone who's ever developed anything non-trivial in Node has seen how insane the dependency tree can get. Every language and platform has its own version of this problem and its own solution. From Windows DLL hell to Ubuntu Snap, from Bundler to Virtualenv, this problem transcends any particular style of programming.
It's good the author is skeptical of the promises of functional programming, but the total rejection of OO concepts as useless reveals that ultimately the author didn't really learn anything useful. The author fails to address how abandoning OO solves any of the problems he claims to have. "Ew, that's gross!" is not a useful analysis.
By some odd cosmic anomaly, I learned programming almost exclusively in functional programming environments. My first language was R, and I subsequently learned Scheme, Clojure, OCaml, Haskell, and currently program primarily in Scala. Having never gone through the OOP trend, and realizing that my current programming experience happened to be du jour, gave me some undeserved confidence. So much so that I would regularly make fun of all of the Java drones at my work for their insistence on using such an inferior paradigm.
Then due to some directions I was taking at my job, it became very valuable to run millions of simulations of warehouse and transportation operations. After months of pain, I discovered object oriented programming (luckily I didn't have to abandon my language of choice to get it). Comparatively speaking, there wasn't a functional design pattern I could find that could come anywhere close to the simple elegance of OOP for modeling people, vehicles, warehouses, etc.
It's almost as if different ideas have different virtues in different domains.
> it became very valuable to run millions of simulations of warehouse and transportation operations. After months of pain, I discovered object oriented programming
That's interesting. I think simulation specifically is an area that fits OO too well. It even fits the classic textbook example of "a Car is a Vehicle, which encapsulates an Engine. Its state has speed and position, etc."
Back in the day, when OO languages were beginning to be promoted in the late '80s, the intro always began with a reference to Simula 67: https://en.wikipedia.org/wiki/Simula
My biggest gripe with OOP is the Oriented part. If you design your entire codebase around OOP you will run into architectural problems. Especially with so-called Cross Cutting Concerns[0]. The way I tend to write code, is to just start with my main function and write whatever procedural code I need to solve my problem. If I start seeing patterns, in my data or algorithms, that's when I start pulling things out. I have heard this approach being called "Compression Oriented Programming", but I don't care much for what people want to call it.
This approach doesn't mean no objects ever. But only when your problem actually calls for it. Likewise you will also end up with parts that are purely functional, data-oriented, etc. But they will be used where they make sense.
On top of that I'm also using pure C99. It does away with a lot of the fluff and cruft in other languages. In the past I used to try to fit my problems into whatever fancy language features I was offered, which cost me a lot of time analysing. Now I just solve my problem.
Mind you, C is not a perfect language. There are features I wish it had. But for my approach to programming it is the most sensible to use, apart from maybe a limited subset of C++ (such as function overloading, and operator overloading for some math).
> The way I tend to write code, is to just start with my main function and write whatever procedural code I need to solve my problem. If I start seeing patterns, in my data or algorithms, that's when I start pulling things out.
That is the same technique that Stepanov describes in "From Mathematics to Generic Programming".
Most of the problems he brings up are already addressed in major OOP languages.
1) Inheritance can be confusing and messy.
Yes, hence the advice: Prefer composition over inheritance. Instead of having B inherit from A, declare an interface I, and have both A and B implement I. If B wants to reuse A's functionality, it's free to do so through composition, and not through inheritance.
There are some edge cases where inheritance is vastly simpler than composition - mostly when the interface requires you to implement 20 different methods, and there's only 1 method that you really care about changing. Using inheritance here gets rid of a ton of boilerplate, but that's a conscious choice you're making. If you don't like this, just revert to using composition.
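A minimal sketch of the composition approach described above (the `Greeter` names are hypothetical, invented purely for illustration): both classes implement the same interface, and one reuses the other's behavior by holding an instance of it rather than inheriting from it.

```java
// Both classes implement the same interface; LoudGreeter reuses
// EnglishGreeter through composition (a contained instance), not inheritance.
interface Greeter {
    String greet();
}

class EnglishGreeter implements Greeter {
    public String greet() {
        return "hello";
    }
}

class LoudGreeter implements Greeter {
    private final Greeter inner = new EnglishGreeter(); // composition, not "extends"

    public String greet() {
        return inner.greet().toUpperCase(); // delegate, then adapt
    }
}
```

Callers depend only on `Greeter`, so swapping one implementation for the other requires no changes at the call site.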
2) Encapsulation can leak if you write buggy code
Any program can break if you write buggy code. Not sure what the author's point here is. In order to encapsulate your class carefully, either accept immutable inputs, or make deep copies of them. If neither happens to work, warn users that class behavior is undefined if they misuse it. This is what every non-thread-safe class already does anyway: it warns users that if you use them in a concurrent manner, things may break.
More importantly, when dealing with internal state that's created by the class, make it private and ensure no one else can access it. This also serves to encapsulate the internal implementation and algorithm from external users.
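For instance, a class following the private-state and defensive-copy advice above might look like this sketch (the `Roster` class is hypothetical, just to illustrate the pattern):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class Roster {
    // Internal state is private; nothing outside can reach the backing list.
    private final List<String> names;

    Roster(List<String> names) {
        // Defensive copy: later mutations of the caller's list can't leak in.
        this.names = new ArrayList<>(names);
    }

    List<String> getNames() {
        // Read-only view: callers can't mutate our internals through the getter.
        return Collections.unmodifiableList(names);
    }
}
```

After construction, adding to the caller's original list leaves the `Roster` unchanged, and calling `add` on the returned view throws `UnsupportedOperationException`.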
3) Polymorphism is... not unique to OOP languages?
Yes, using interface-based polymorphism is a good idea, and covers most of what people need. How does this make the argument that we should never use OOP languages?
--------
The author brings up valid points about what to watch out for when coding in OOP. Other books like "Effective Java" bring up the same points as well. But instead of also acknowledging the benefits that come with OOP, and teaching people how to avoid these pitfalls and write code the right way, the author jumps to the extreme position that OOP languages should be abandoned entirely. Can we please avoid this type of wild overreaction, and the pointless jumping from one shiny tool to the next in a never-ending search for a silver bullet that will solve all of our problems? Because let's face facts: no such silver bullet exists.
The argument about encapsulation was the most confusing for me. I do not see the problem in the example he gives: either you want an object of class A to be used by multiple objects (possibly of different classes), in which case there is no problem, or you want every instance of A to be used by one and only one instance of some class B, in which case, like you said, you either make a deep copy or create the object inside the constructor instead of passing it around (if no shared state is needed).
I guess he does make a point about inheritance and how confusing it can be, but like you said OOP still has value outside of inheritance.
So... we said we should never look for something better than Java? Or OOP? What if FP gives us all the important things that OOP does, and more? Why wouldn't we use it?
Personally I want to learn from and use a language that supports as many of the paradigms as possible, like Scala or Swift. Let me choose based on what I need. That being said, I'd much rather work in pure FP than OOP, because most of the advantages of OOP can be achieved via FP and the inverse is not true.
In other literature, the answer to inheritance is "composition" or "components" rather than "delegate and contain." A nitpick, but I think it better captures the meaning of the method.
Bob Nystrom wrote a very good chapter on composition in his Game Programming Patterns book [1]; it's worth reading if you want to program in the OO paradigm.
Interesting. I almost thought this was going to be an advertisement for Swift, since I saw this exact argument in a WWDC talk.
Apple calls Swift a "protocol-oriented" programming language, and with the addition of first class value types, tries to solve these problems in their own way.
I'd definitely suggest people frustrated by the problems outlined in this post to check out the Apple talk on protocol-oriented programming in Swift.
It's a great video, and clearly the design behind Swift has tried to address many of the biggest problems with modern OOP. But Swift is by necessity a multi-paradigm language that has to interface with existing OOP code.
If you're writing a Mac or iOS app, you're generally writing Cocoa with either Swift or ObjC syntax. Cocoa is unbearably stateful. For a simple app, you not only don't get to work with value types, you typically don't even get functions or methods which return anything.
Obviously for something more complex, your non-UI code can absolutely be written with the principles described in that video in mind, but if you try to apply those principles to Cocoa APIs directly, you're going to be fighting against its stateful nature constantly.
In ObjC everything is a "reference type" since it's a pointer, but most of the data structures are immutable, and some of them are stuffed into tagged pointers, so they really are values after all. (Try implementing that in C++!)
CoreAnimation and bindings make AppKit very slightly more functional, but not really.
Even before Swift, Apple advocated composition over inheritance.
Alas, when teaching programming and OOP, inheritance somehow comes up as the most important and interesting part, so few learn proper architecture and are doomed to write articles like this.
In 2016 we are still talking about Cobol, which is spread in a relatively niche market yet considered a pillar in fields like banking. How can the object-oriented paradigm be considered past, or even bad? It is the present and will be the future for at least the next 20 years, considering the billions of lines of code. From a management perspective, such statements are not strong enough to be justified.
I find this sort of article to be just bread and butter for code monkeys: people who learn the most recent paradigm or technology and think it's the key to happiness, or people who read for the first time a book like the ones from Bob Martin and feel they already know how to develop good software - or poems, as mentioned somewhere in the book - and list the bad things about other types of software architecture or design.
While I think that you are definitely right, and that nowadays Cobol would probably not be the first choice for 98% of companies out there, I think we should also consider that according to TIOBE ( http://www.tiobe.com/tiobe_index ) Cobol is one of the 20 most used languages in the world - and it's also thanks to it that we can use banks efficiently. Can we all say goodbye to Cobol? Go ask those who program in that language - who make a s* load of money with it.
I personally could say goodbye to Assembly, as I only used it for learning purposes, I could say goodbye to Perl, because I don't use it and I don't like it, or I could say goodbye to Visual Basic - for other reasons. All these languages are extremely powerful, they still do their job today, although some are too low-level for our everyday applications, some are just in a process of replacement (like Cobol, of course). However, I tend to keep these opinions for myself.
"Saying goodbye" in such cases is a strong statement, inherently sensationalist and biased, therefore my criticism toward the title. In a field like computer science, you can't and shouldn't use such titles. Such a title shows already how biased you (the author) are.
This is just a rant. It's not about object-oriented vs. functional. Perhaps it could have been, if it had said how functional programming helps with these issues.
The summary of the article is programming is nuanced. You can attribute some nuances to OO design.
Let's wait a few years and we'll see plenty of articles titled "Goodbye functional programming". You can write good and bad stuff with OOP, and you can do the same with FP. There is no one-size-fits-all programming style.
You're making a false equivalence. Functional programming has a strong basis in discrete mathematics that OOP simply does not. This makes it better suited to accurately describing computation at a high level than OOP.
However, you probably arrived at this conclusion because you still see programming languages as having intrinsic paradigms and those paradigms meaning anything about computation.
The fact of the matter is that all programming languages are little more than notation systems for algorithms. The best programming language is the most accurate algorithmic notation system.
You mean lambda calculus? That model has plenty of shortcomings. Someone recently pinpointed the problem for me: complexity analysis is impossible in a system that is inherently unaware of time and of the computational cost of its transition rules. Lambda calculus is inherently timeless (both in the theoretical sense, because of Turing equivalence, and in the practical sense of being unable to provide a proper framework for complexity analysis). See http://cstheory.stackexchange.com/questions/376/using-lambda....
So you list one shortcoming, and don't even cover the position that lambda calculus is better suited as a foundation for a programming language despite not being better at time complexity than a Turing machine would be?
> This algebraic view of computation relates naturally to programming languages used in practise, and much language development can be understood as the search for, and investigation of, novel program composition operators.
This is the usual response. I didn't say the algebraic approach has no advantages, but the complexity-analysis issue is always swept under the rug by its proponents. Operational and axiomatic semantics also have their place, and if the theoreticians haven't yet settled on "the one true way", why do regular programmers think they have?
I agree that the search for novel composition mechanisms is a prominent goal of PL theory and research but it is not the only one.
Fair enough. I'll agree that there is no "one true way" and contest whether one is even likely. However, I do understand the frustration some theoreticians have with the mainstream of programming, when their contributions to programming language theory through denotational semantics, type theory, monads, etc. have been largely ignored for close to thirty years in favor of recycling the same tired dialects of Algol over and over again.
If you want a foundation for computing, may I humbly suggest imperative algorithms in the integer RAM model. That gives correct answers to questions like "what's the time and space complexity of quicksort in the average and worst case?" Good luck answering that with lambda calculus, or Turing machines for that matter.
The RAM model is nice for some simple space and runtime complexity analysis. Of course, it doesn't know about eg caches or parallelism, either.
(Lambda calculus is an interesting foundation mostly for work on correctness of algorithms---indeed analysing cost of computations is harder. Chris Okasaki has done a good case study of analysing runtimes of purely functional algorithms / data structures.)
If you are happy to prove the correctness of tree-sort, which does the same comparisons as quicksort, it's straightforward.
If you want to talk about the clever in-place quicksort, you'll want to talk about a slightly more complicated model.
Either implicitly, where you still model everything as lambdas, or explicitly: you can model state (like an array) and its manipulation inside a lambda calculus framework just fine. Eg using linear typing, or something like Haskell's state monad (which has a pure description, but ghc is smart enough to optimize it to what you would expect when translating to machine code).
So that would be the traditional analysis of quicksort, mucked up with linear logic terminology so you can't see what's going on.
The sweet spot of lambda calculus is structurally recursive algorithms on trees. You can use it for other kinds of algorithms, but there it just makes things harder, and I've never seen it solve a problem that wasn't solved first by other means.
I came to this conclusion because as soon as a paradigm becomes popular clueless people who don't really understand the foundation will start using it. Then you have consultants popping up who don't know much either but come up with slick patterns to sell. This happened with OOP and will happen with FP.
Functional programming is not that easy to do right in the real world. It takes some level of abstract thinking and understanding. Not too many people in this industry really want to do this level of thinking. They just want a recipe for getting things done with the least amount of thinking.
Yes, but the foundation under OOP is considerably less solid. I agree that you can write a bad program in any language, but the fact remains that it is impossible to write certain bugs in statically typed pure functional languages and that functions as a base abstraction are more apt for describing computation than objects are.
Totally agree. But don't forget that the real world is pretty messy and a lot of people are not willing to think deeply about a problem; they want quick solutions. Of the people I work with, I would bet only a small percentage took the time to understand OOP. Only a few will take the time to understand FP. The rest pretty much just copy/paste boilerplate code.
Wow, that's depressing. I haven't had a lot of experience working with other programmers but if this is what the software industry looks like I can understand why you'd have these opinions.
> The best programming language is the most accurate algorithmic notation system.
Actually, the best programming language is the one you're programming in right now. /s
Although my comment is in jest, I do think it has a lot of truth; I'm never going to write something for HPC in Haskell just because it has better algorithmic notation, I'd most likely choose Fortran or C. I'm not going to write a web back end in Haskell, I'm going to write it in Python, Ruby, Javascript, or any other language much better suited for the application. I'm not going to write statistical model analysis in Haskell, I'm going to write it in R, Python, Matlab, or Mathematica.
Saying that all programming languages are little more than notation systems for algorithms doesn't reflect the reality that programming languages are not just math operations, even if that's what they become in the compiler or on the CPU; telling people that disrespects the entire industry and academic side of programming and computer science.
Haskell is equally suited to backend web development as Python, Ruby, or JavaScript. Just look at Yesod. There's nothing about Haskell that makes it worse for web development, and a lot that makes it better.
For what it's worth, Haskell has uses in finance for statistical modeling because you can model different currencies and equities/derivatives with ADTs and the type system will ensure you never confuse them in your code. Haskell syntax is beautiful and in many cases mimics mathematical syntax and as such is used frequently by mathematicians.
But I don't mean this to be a defense of Haskell. There are many posts that do it better, just yesterday there was a blog post on the front page about one startup's use of Haskell in its main product: http://baatz.io/posts/haskell-in-a-startup/
There are other languages like Lisp that directly imitate lambda calculus. Lisp is elegant because it is like programming in a syntax tree with the entire structure of the tree available for manipulation such that you can create extremely concise and expressive programs with less code.
Funny how some people believe software programming is one big problem to solve as a whole, rather than a craft. OO is one tool in your toolbox. A good craftsman doesn't use one tool; he knows which tool to use for which job.
Inheritance is overused in OOP. There are many ways to share object behaviors, inheritance only works well when you expect all objects of both classes to share all behavior except one or two things. Even then, you should investigate dependency injection before reaching for inheritance.
For the example given for the Triangle Problem, the author isn't clear about exactly what behavior is being shared among the classes. The top of the tree, PoweredDevice, gives an indication, but my guess is that there are more responsibilities than just power, these responsibilities aren't being reflected in the domain model as they should be.
Instances of a class share behavior with other instances, it is the state that differs, i.e. the data being stored in the instance variables. In the example hierarchy, the state being stored is left out of the analysis, but it's the first place I would look for a missing domain concept. My guess would be that the most concrete class is going to be models of consumer peripherals, of which instances are intended to represent actual devices.
In this case a copier, which contains both a scanner and a printer, but not an actual discernible model of scanner or printer, would simply inherit from PoweredDevice. That it has this functionality does not mean it need actually have those in its class hierarchy. It is a job better suited for mixins, or injected dependencies.
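The injected-dependencies alternative for the copier might look like this sketch (class names are hypothetical; the article's hierarchy only names PoweredDevice): Copier gets scan and print behavior by holding a Scanner and a Printer, not by descending from either in a class hierarchy.

```java
class Scanner {
    String scan() {
        return "scanned page";
    }
}

class Printer {
    String print(String doc) {
        return "printed: " + doc;
    }
}

class Copier {
    private final Scanner scanner;
    private final Printer printer;

    // The scan/print capabilities are injected, so Copier's class hierarchy
    // stays flat: it doesn't need Scanner or Printer as ancestors.
    Copier(Scanner scanner, Printer printer) {
        this.scanner = scanner;
        this.printer = printer;
    }

    String copy() {
        return printer.print(scanner.scan());
    }
}
```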
From a purist view one could argue that inheritance doesn't belong in OO in the first place. Alan Kay's first descriptions of Smalltalk & OOP did not include inheritance concepts.
    class A {
        public int field1;
        public void method1() {
            System.out.println("A");
        }
        public void method2() {
            System.out.println("A");
        }
    }

    class B extends A {
        public int field2;
        public void method1() {
            System.out.println("B");
        }
    }
is basically equivalent to
    interface C {
        void method1();
        void method2();
    }

    class A implements C {
        public int field1;
        public void method1() {
            System.out.println("A");
        }
        public void method2() {
            System.out.println("A");
        }
    }

    class B implements C {
        public A a;
        public int field2;
        public void method1() {
            System.out.println("B");
        }
        public void method2() {
            a.method2(); // delegate to the contained A
        }
    }
As you can see, both are basically the same except for one little detail. The benefit of inheritance is that you don't have to write the redundant "method2" in class B. So the only time inheritance is ever useful is when you don't want to override all the methods. But once someone tells you to model everything in terms of class hierarchies, even when you don't need them right now, you've negated the tiny benefit inheritance ever had, which means that not having it in the programming language actually has positive consequences.
Kay has also suggested that late binding or dynamic dispatch is a massive deal, and in most statically typed oop languages, dynamic dispatch is typed according to the inheritance hierarchy.
The upshot is that to teach OOP in Java, say, you need to talk about significant parts of the inheritance machinery just to get dynamic dispatch, perhaps later cautioning against implementation inheritance and even interface inheritance.
In Smalltalk, you can talk dynamic dispatch without messing around with inheritance at all.
For a statically typed language where dynamic dispatch is free from inheritance graphs (sometimes described by saying that subtyping is not inheritance) see Ocaml's structural subtyping via row polymorphic records (bit of a mouthful --- but I need to differentiate from Ocaml's module system which is structurally subtyped and supports inheritance!)
Or an IScannable and an IPrintable interface ("scannable" is probably not the correct English word for "something that can scan"), with a Scanner implementing IScannable and deriving from PoweredDevice, and a Copier deriving from PoweredDevice and implementing both interfaces.
I find many of the objects in .NET very useful and use them in my code.
Also in my code I define and use some classes.
I like the idea of classes. E.g., in my Web pages, I have a class for the user's state. When a new user connects, I allocate an instance of that class. Then I send that instance to my session state store server. To do that, I serialize the class to a byte array and then send the byte array via TCP/IP. The session state store server receives the byte array, deserializes it back to an instance of the class, and stores it in an instance of a collection class. Works great. It's really convenient to have all the user's state in just one instance of one class. Terrific.
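The round trip described above might be sketched as follows (the commenter's code is .NET, but Java's built-in serialization follows the same shape; the UserState fields here are invented for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// All of one user's state lives in one instance of one class,
// which can be flattened to bytes and rebuilt elsewhere.
class UserState implements Serializable {
    String userId;
    int pageViews;

    UserState(String userId, int pageViews) {
        this.userId = userId;
        this.pageViews = pageViews;
    }

    // Serialize to a byte array, ready to send over TCP/IP.
    byte[] toBytes() throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(this);
        }
        return bos.toByteArray();
    }

    // The session-state store deserializes the bytes back into an instance.
    static UserState fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (UserState) ois.readObject();
        }
    }
}
```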
Encapsulation? I don't know what the OO principles say about encapsulation, but it looks useful to me as a source of scope of names and keeping separate any members in two different classes that are spelled the same. So, terrific: When I define a new class, I don't have to worry if the names of its members are also used elsewhere -- saved again by some scope of names rules.
Actually, I much prefer the scope of names rules in PL/I, but now something as good as PL/I is asking for too much!
But inheritance? Didn't think it made much sense and never tried to use it.
Polymorphism? Sure, just pass an entry variable much like I did in Fortran -- now we call that an interface. Okay. I do that occasionally, and it is good to have.
Otherwise I write procedural code, and the structure in my software is particular to the work of the software and not from OO.
I couldn't imagine doing anything else.
I've seen rule-based programming, logic programming, OO programming, frame-based programming, etc., but what continues to make sense to me is procedural programming with structure appropriate to the work being done. E.g., the structure in a piece of woodworking is different from that in metal working, residential construction, office construction, etc.
> But inheritance? Didn't think it made much sense and never tried to use it.
This is interesting to read. When OO was popular, inheritance was at the top of the list as the OO killer feature: "your 'Manager' class is just an Employee but with 3 extra methods, so you save so much copy and paste," etc.
I believed it at some point. Then, perhaps 15 years later, after everyone has been bitten by the deep, confusing inheritance they had to maintain and debug, inheritance is the devil.
It is just interesting to observe how a onetime selling point of OO is now a big giant warning to stay away.
Re: classes, sounds more like you're describing structures. This is not really specific to OO, and it's available in FP languages (e.g. records in Haskell). And these provide scoping, which isn't really encapsulation, as far as I understand it.
Yes, I use OO classes much like I used to use PL/I structures which, of course, are generalizations of C and Cobol structures. And classes have some additional advantages, e.g., can do serialization.
And, what I really like is the name scoping.
What else is significantly different about encapsulation seems a little obscure. Maybe the biggie difference is not only have the data with name scoping, but also have the code, the methods, stuffed inside the class and making changes in the data in the properties of the class. Okay -- so get to use the class and its methods without keeping track of subroutines/functions separately. Okay.
Get two kinds of essentially nested descendancy: static (from the source code) and dynamic (from execution, as calls are made but have not yet returned). It's good to think about both at the same time. The push down stack of dynamic descendancy defines what parts of the code are active and, thus, can be called. E.g., the names currently known by inheritance are from the dynamic descendancy working through the static descendancy. And, then, can call code back in dynamic descendancy that is still known from the static descendancy.
Wow! It's simpler than that: Can have BEGIN-END as a chunk of code with its own names. Any name used inside there but not declared there is inherited from the code that is active (in dynamic descendancy) and in the static descendancy.
E.g., can have ON CONDITION X; BEGIN; ... END; and execute that. That execution just says that, in the future, if see RAISE X, then execute the code in the BEGIN-END. Then that code can do GOTO Y where Y is a label known in the BEGIN-END block and in code in the dynamic descendancy. If the code of Y is 11 steps back in the stack from the RAISE, then all the 10 lower levels of code are exited. Lots of cleanup happens automatically.
So, the ON CONDITION is executing code back in the stack of dynamic descendancy -- darned cute. And can do such things quite generally.
So, code block A can have an internal subroutine B, call C, pass an entry variable to B, and the code of C can call B. B then can have values inherited from the code of A and use those values as B executes from the call of C. Did that once when scheduling the fleet for FedEx!
Once in some AI work at IBM's Watson lab, used a fancy case of that name scoping to save our project a few weeks of work and improved the quality of our product. Got an award for that!
That's a fast outline -- maybe not fully clear. Any questions?
At times I've thought that that little device (pattern?) was essentially a closure, but since I don't really know what a closure is I can't really say!
Uh, I've done a lot of programming in a lot of languages, but my main interest is not as a programmer but as a founder of a startup that needs some software!
He quotes Joe Armstrong's criticism of OO, but later Seif Haridi corrected him, leading Armstrong to say:
"Erlang might be the only object oriented language because the 3 tenets of object oriented programming are that it's based on message passing, that you have isolation between objects and have polymorphism."
I'm not sure that's a fair summary. Armstrong states that his opinions have changed over time, but mostly because his thesis supervisor pointed out Erlang is perhaps the only truly OO language out there.
After he quotes what Alan Kay says...
The notion of object oriented programming is completely misunderstood. It's not about objects and classes, it's all about messages.
Armstrong then says:
Erlang has got all these things. It's got isolation, it's got polymorphism and it's got pure messaging. From that point of view, we might say it's the only object oriented language and perhaps I was a bit premature in saying what object oriented languages are about. You can try it and see it for yourself.
yep... I was just dunking a hobnob while writing some good old imperative C. Did anybody notice that Vulkan, "the future" of graphics APIs, doesn't f*ck around with OO or Functional?
Well, that's because nobody except John Carmack advocates FP for the kind of fast code that complex graphics programming requires. And even he says that it should be used in moderation.
OO is treated almost like a religion by some people. It's useful to be able to create instances of some things but the place OO fails is the "oriented" part. Code is much easier to maintain and understand written in a functional state.
If something doesn't need to be an instance, it probably shouldn't be one.
This article articulates a lot of problems I've noticed in OO code, I think it would be foolish to ignore it. My life as a developer became 10 times easier once I realised some of these same pain points and pivoted, or maybe even more so.
In school I was taught all about OO coding practice and I think he's right, they were wrong.
> Code is much easier to maintain and understand written in a functional state.
Why is it then that programs written in OO outnumber FP software by 100,000:1 or more?
For example most of the software written for iOS and macOS is written in C++, Objective C or Swift. All three are class-based object-oriented languages.
What you say may not be true or may not be very important.
Why is there so much COBOL or Visual Basic still around? Or tons of PHP and Javascript on the server? Just because there is a lot of programs written in $x doesn't mean $x is better, more maintainable, more efficient, easier to understand. It could be, but it doesn't have to be.
That argument assumes the average developer, average university, average company can look around and is able to effectively pick, understand (this implies the ability to learn) and evaluate the merits of a framework or language. Most don't even have a choice. They are taught, or maintain the code base, or listen to what they're told by management, and that's their choice. Managers hire for the code base that's already there and for languages / frameworks they know. Developers who just want a job (nothing wrong with that) will pick and learn the languages that managers hire for.
Declaring functional programming the rescue at the very end of the post is just not right. FP will gain you something under particular programming requirements while being just wrong for others.
Looking back now on 25 years in software development, plain old imperative programming still bought me the most in terms of getting stuff done (Banana problem). With a decent set of standardisation and sane language defaults a mostly imperative approach will get you very far.
Golang hits that sweet spot very decently for me. Missing type generalisations are an impediment from time to time though.
Thanks for writing this up. I work with OOP programmers a lot and I am tired of explaining problems with OOP over and over. This article just saves me that effort.
I love using object oriented design and find it quite odd when I meet seasoned programmers who still don't 'get it'. It feels a bit like meeting someone who says Obama was born in Kenya.
Here's a concrete example of object oriented design:
To understand the problem domain, go to https://whiteboardfox.com and click Start Drawing > Create Whiteboard, then draw something. Play around with different colours, erase some lines, try undo and redo, etc.
I honestly don't see how you could implement it without object oriented design. Surely it makes sense to have a Diagram class that encapsulates a list of strokes and pictures? Isn't it easier if the Diagram class exposes addStroke() and removeStroke() but does not reveal how it's implemented? And shouldn't I have a separate view class which encapsulates how much zoom and pan the user has applied to the diagram?
Could you implement Undo and Redo actions so neatly without a command pattern?
And isn't it lovely that the ViewController can switch between different modes (Pencil Mode, Eraser Mode, etc) without needing to know anything except a small interface that is common to all modes?
I actually get a little thrill when I think about how cleanly this design addresses the requirements. Could I get that feeling if this were implemented in a functional programming style?
> I honestly don't see how you could implement it without object oriented design. Surely it makes sense to have a Diagram class that encapsulates a list of strokes and pictures? Isn't it easier if the Diagram class exposes addStroke() and removeStroke() but does not reveal how it's implemented? And shouldn't I have a separate view class which encapsulates how much zoom and pan the user has applied to the diagram?
Data structures and operations on them are indeed a useful concept. Not limited to oop, though.
> Could you implement Undo and Redo actions so neatly without a command pattern?
Persistent data structures make it even easier. You just keep a list of references to the old states around.
> I actually get a little thrill when I think about how cleanly this design addresses the requirements. Could I get that feeling if this were implemented in a functional programming style?
I can't predict your feelings, but what you described could very well be done in eg Haskell. (You just wouldn't use oop classes, but you can abstract over similar concepts.)
Have a look at http://shaffner.us/cs/papers/tarpit.pdf for some thoughts on software design. (The paper advocates using relations. I have seen them work very, very well in functional settings. Just the opposite of ORMs.)
OO is one of those things best used in strict moderation. Unfortunately, most people lack moderation, and strive not to necessarily solve the issue, but to show everyone just how smart they are. As a result we get object hierarchies 10 layers deep, and 1000-line source files (or worse, dozens of 100-line source files) which don't do anything meaningful.
God damn it I begin to hate Medium. Just another bullshit article. When I read that dipshit's description: "Software Engineer and Architect, Teacher, Writer, Filmmaker, Photographer, Artist…" Great. And you want to tell me that OO is dead and functional the only future? Fuck off.
Aside from the quality of the submitted article, this comment does not contribute to the discussion and invites a trail of predictably off-topic replies. Please comment civilly and substantively on HN, or just not at all.
The self-descriptions people use these days are so ludicrous it's hard to tell if it's satire. If I had a Medium account, here's how my description would read:
"Software Engineer, Philanthropist, Astronaut, Shark Hunter, Breaker of Chains, Lord Commander of the Snack Bin, Protector of the Repo, and Part-time Cat Dad"
"Software Astronaut" maybe? I'd go for "Teacher" (just a little bit more gnomic than "Educator"), and put it before "Philanthropist", all segues nicely from "I'm a serious adidactic genius" to "but I've got a fun side"
Don't forget "Father of 5 Children and 8 Bastards, Environmental Activist, Feminist, Intellectual, Playing 10 Instruments and Speaking 20 Languages, More Wives than King Solomon"
Put perhaps a little more coarsely than I would have, but I completely agree with the sentiment.
A[nother] minority view getting mainstream (for us) exposure because a few other dinosaurs and hipsters agree.
People need to chill out about this stuff. You don't like OOP? That's great but if it's alright with you, the rest of us are going to carry on with it while you fade into irrelevance.
Of course not everything fits an OOP flow but even Java (the very most verbose OOP IMO) will let you do things linearly or functionally if you want to.
I get your desire to deflate something you dislike. I know this calling-out of OOP, as in TFA, has become routine (e.g., Steve Yegge's posts were quite celebrated here, and that was years ago). Maybe you're tired of the banana quote, and thought this post brought nothing new.
But your comment is purely ad hominem -- to the point of calling the author a dipshit with absolutely no basis -- and does not engage TFA (breaking the first two comment guidelines at https://news.ycombinator.com/newsguidelines.html).
HN has lots of great articles published on lots of different platforms. For whatever reason - reasons I certainly can't explain - Medium just attracts all the junk.
begin? I've always hated it. I think it is called "medium" because it isn't well-done...
However, I don't think the style of this article is Medium's fault. I find it more and more common that I see a headline and have to skim through the whole article to find out what the point is. I don't know if it is from watching too many TED talks or just poor writing (not mutually exclusive, of course). But then I go read the comments and see tons of people commenting about how they loved the article and how well written it was so maybe it is me that is wrong.
On second thought, no. It's the children who are wrong.
So basically, you've given zero actual counterarguments other than an ad hominem. Or let's go at it this way- How exactly did you expect someone to introduce themselves who felt they had something to say about OO in a negative light?
It's not just Medium. I get followers on Twitter who have that kind of bio, except that the bio words are separated by pipe signs. My bogofilter tells me that they are all likely part of a scam of the same group. Further confirmed by the content of their tweets, which follow a pattern.
Well, I don't know much about the photographer and artist tags, but he was credited as writer on a few movies.[0] The "teacher" bit may be because he ran a training session somewhere, or was a TA or tutor at some point.
Well he did leave an ellipsis at the end of the sentence, implying that the list isn't exhaustive. He could have many more grand titles and occupations. His headshot is also a Jobs-esque, black and white photograph of him wearing a black shirt. This must be satire, right?
What if every Java developer who discovered the immense cerebral gratification in Scala decided to write an article with the theme "Aww shucks... Frick You Java, I wasted so much time on you damn it !!! I am going to Scala and I am never coming back."
Also, the examples the author gives may be weak. Inheritance breaks my code? If it's code I don't own, I use dependency management. If it's code within the same team, then code review before commit?
The reference-owning example for encapsulation assumes references are globally held?
(PS: Just using Java/Scala here, but feel free to vote me down if the experience is different with other language pairs. Oh, also I am having dirty dreams of leaving Java and indulging in Scala's monads, as I recently discovered I wasted time on Java)
The king is dead, long live the king! Thinking that a new framework/language/paradigm will solve all your problems is naive. The author should know that if they've truly been programming for decades, as stated in the article.
Who said anything about "solving all problems"? How about just making some things a bit better? I get really annoyed when I see the argument "Y does not solve all the problems in X, therefore there is no point in moving away from X."
C++ has already moved past OOP when it was standardized in the 90s by having a standard library built around regular types and generic programming. Here is Sean Parent's talk
'Inheritance Is The Base Class of Evil', which discusses some of the same issues with OOP and the solution in C++:
I wonder if someone has invented "modular" programming yet.
Judging by the UNIX paradigm of the command line tools, the idea is clearly out there.
Instead of objects, do modules - things that do one thing, and carry minimal dependencies.
You need a banana? Grab the banana module. You need a banana with ice-cream center? Feed the "center" callback of the banana module with "ice-cream" instead of "banana intestines".
You need a copier? Grab both printer and scanner.
Is there an existing language that matches what I'm describing?
I've enjoyed OOP more than anything else. The real issue here is that these pillars are but low-level building blocks. To fully take advantage of the OOP paradigm, you need to take a look at DESIGN PATTERNS. They'll solve (almost) any issue mentioned here. That is, if you know how to apply them, the right way, at the right time (just like everything else in this damn world).
It seems to me that my OOP is so functional that I didn't feel these issues that badly (it is true that I actively evaded them with the design), and at the same time it sounds like falling into them is typical of not-so-great OOP programmers.
It's curious to see that OOP hate coming from someone that got a chance to work in Smalltalk.
So many of these essays feel academic. Why doesn't anybody ever talk about languages in terms of their abilities to satisfy business rules. There is a whole world (or valley) of MBAs shoehorning great developers into their ideas, why not optimize for the interfaces there?
Of course I may be dumb and what I'm asking for is ColdFusion or Business Objects or Salesforce or something.
The specific problem described in the "encapsulation" section is solved in modern C++ (11/14) by std::unique_ptr. While this may seem like a trivial quibble, I think it's part of why I find modern C++ quite tolerable despite disliking almost every other "object-oriented" language.
OO paradigms are not magical and they have a learning curve. They can look simple and obvious but knowing how to abstract your problems using these techniques is not simple and it's what differentiates a good programmer from a bad one.
It's easy to complain about them but in most cases I see it's a misuse issue.
I think that fundamentally OOP and FP are both necessary for any language that wants to run relatively close to the metal.
The reason is that a computer is fundamentally all about state, and you need something to manage that state. This is the antithesis of FP. OOP manages state somewhat better.
Actors (classes that communicate via true async messages, potentially over the network) composed of objects (classes that communicate via simple sync stack calls) which use pure functions whenever possible.
Explicitly handled state + mutations, polymorphism, implicit safe concurrency, fast local calls + opportunities to optimize + safe and easy to reason about.
We keep getting caught in theoretical cesspits. Perhaps the way forward is to reduce our focus on philosophical discussions of programming paradigms, and to iteratively figure out, using well-defined metrics and outcomes, how best to develop software(and to define these in the first place). Taste, one-size-fits-all trends and hype are what drive the industry, and we tend to ignore, or hopelessly lament, the (unmeasured) waste that results from these.
And then, once we have hard data, we should have the courage to follow the data, even if it means throwing away our cherished pet paradigms and methodologies.
As someone who recently started learning FP in Haskell, I think one cannot look at individual parts and compare OO to FP. I find that while both have strengths and weaknesses, in FP the sum of parts is much greater to appreciate than in OO with comparable energy invested in them in problem areas where performance is critical, but not too critical.
That has been my cumulative verdict so far learning FP - perhaps this view would sway one way or other as I learn more about it
Fast-reading through the article, I was already prepared for OP to shift to FP. OP should assess his own fallacies and not blame imperfect concepts.
One can probably improve certain things by switching paradigms, but we as humans fail at conception, communication and complexity (although we can brute-force the latter). There is no language that can solve these problems sanely, and it is questionable that any ever could.
This person has been doing class oriented programming for years and calls it OO. He will try structured programming with recursion and call it FP now...
I find the `Printer + Scanner ~= Copier` example poorly designed.
Sure, the Copier has both Printer and Scanner; however, in practice, the "Start" function on a Copier differs from either -- it starts the scanner and forwards the result to the printer. It might also print multiple copies.
Point being, the `start` functionality here differs from both Printer and Scanner, hence the `start` method shouldn't be inherited.
Scanner and Printer can be made interfaces, then Copier can hold reference to IScanner and IPrinter, it doesn't have to care about their concrete implementations, as long as it's something that has a scan() method and print() method, for all the copier cares it doesn't have to be a powered device at all, it could be a cloud printer and a scanner located 1000 miles away.
My experience with ReactJS has been the first time I felt I had the perfect balance of OOP and FP.
The components are so well defined as objects since they have the luxury of being tangible. But using them in a pure manner with zero local state makes them so easy to reason about and reuse.
More can be said about Redux but I'll leave it there.
You can create components from functions or as classes. I will create a button class, which has some methods for how it renders and updates itself and how a user interacts with it. I can then subclass it to make a toggle button or whatever else.
Say I have a form with 4 fields and a "Submit" button. The submit button's `onClick` tells its parent that it was clicked. The parent calls `getValue()` on all of its children (the form fields) and dispatches a `updateMyDatabase()` action.
My example may be controversial to some as my form fields have state. Some may say that on every keystroke, the form should have an action called that updates the `centralState.formValue`. But regardless of that, I think it's still evident how there's OOP involved?
A programming paradigm gets massively accepted not because people hate its predecessor, but because the new one is more intuitive and useful. If you hate OOP so much, then show how it is counter-intuitive compared to FP. Just complaining makes you sound too emotional; as a SE you should know how to analyze objectively.
FYI, in the OOP paradigm, inheritance, encapsulation and the other concepts all serve one goal: designing a better interface, one that also follows how the real world is designed, for example the power outlet in your home.
Class Copier
{
    Scanner scanner;
    Printer printer;

    function start() {
        printer.start();
    }
}
---
Placing a Start() in a PoweredDevice base class doesn't make sense in the real world. There are plenty of "powered devices" that don't have start buttons. A phone, a fish tank pump, a smoke alarm, none have a "start." A powered device should have just that, a PowerOn() and PowerOff() or SetPower(bool isOn). I wouldn't even create a PoweredDevice base class unless you have a reason. This is the main fault in your design.
Scanner.Start() should return a byte[] which is the result of the scan: byte[] Scanner.Start(); A scanner is an input device.
Printer.Start() should take an argument of byte[] as to what it is to print: void Printer.Start(byte[] byteArr); A printer is an output device.
Having said that, your Copier class would look like this:
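(The block itself seems to have been lost by the comment editor; reconstructed from the Scan/Print signatures just described, it would presumably be something like:)

void Start()
{
    printer.Start(scanner.Start());
}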
This can easily be enhanced to handle copy counts:
void Start()
{
    byte[] document = scanner.Start();
    for (int x = 0; x < copyCount; x++)
        printer.Start(document);
}
Ideally you wouldn't even make an inheritable Start() method. The Scanner class would have a byte[] Scan() method and the Printer class would have a Print(byte[] byteArr) method. You're trying to ram a square peg into a round hole. Use inheritance when it is convenient and makes sense to do so. Don't force it. Think, what does a scanner and a printer have in common that works the same, then put that in your base class. A power button is about it.
A lot of inheritance is done backwards. You make your classes then find commonalities and put that in your base class. Only create a base class first if you've thought about your object model and you know the commonalities.
Also, there is no reason to make your inheritance chain deep, just because. Build your objects in a way that makes sense. Don't write code or base objects you will never use. You can always insert a class in the chain when necessary.
Mastering OOP is hard, and people who have mastered it get paid a lot of money for their skill. It took me a few years to really understand how to design with it. It's invaluable though. A good object model is a thing of beauty, and a hell of a lot of fun to design.
Edit: I don't know why the editor won't keep the CR's.
Object Oriented Programming simulates the restrained reasoning capacity of the real world. This is done by weaving state into every conceivable unit of computation. The result is a universal and inescapable notion of identity. It's a state conspiracy! Sometimes you are actually interacting with the real world and this is an appropriate constraint. That is only because, in the real real world, these things are pervasively intertwined. Right down to the smallest phenomena we've been able to observe. We can't actually take them apart except for in our minds. To do so is a very old idea, pervasively apparent in western thought, called platonic realism. I internalized it as an unknown known at some point. I imagine that's just how people did it before someone as smart as Plato was able to articulate it. It's sort of the doorway to abstract thought. Most mathematically inclined people have ventured into the depths of the world it conceals. It's necessary in order to properly understand the concept of a "value". When these people first start to program they rely heavily on expressions and functions. They tend to atomize complex values with simple structs. They don't know they're doing it but they're writing "functional" programs. It might be more apparent if we just called them mathematical or algebraic programs. They demonstrate a preference for referential transparency without knowing what it is. Much of their code is outright stateless. They're hesitant to use a "var" as anything but a "let". Many seem to immediately grasp the simplicity and generality of recursion. They have to have it pried away from them like it's a dangerous recreational drug. That recursion is not "optimal" is simply presented as an engineering reality. Always intent on incremental improvement they diligently internalize these "optimal" representations utilizing loops and state. 
They're tricked into feeling they've acquired a worthwhile skill; they don't know they're doing what a compiler ought to. They learn to reserve the truly optimal representations for their mind's eye. With the desire to utilize their new "skill" they move towards external representations that could only be considered "optimal" by an unconscious machine. All of this damage is done in the earliest stages of learning; probably before they've even attempted any significant programmatic interaction with the real world. That's when everything gets worse. They start trying to coordinate too much state and they can't cope. They're told they need these object things. Everything seems to get easier: Sockets, Widgets and even the Lists that had been such a struggle to use before. They choke down the declaration syntax and hastily strap their newfangled constructor and destructor gadgets onto their toolbelts. These are excellent tools for arbitrating between the abstract world and the real one. The ability to hook into their creation and destruction provides abstract objects with a canonical state-of-existence. This is necessary to fully simulate the identity possessed by real objects. For the purposes that they've learned them, objects are immediately and overwhelmingly useful. They come to appreciate the clarity of the method invocation syntax for manipulating state. They're right to do so. The functional languages themselves even sort of "do" it. Tragically, with their most fundamental notions of computation already brutally violated by the state conspiracy, they're vulnerable to seeing objects as a universal paradigm. Everything is an object. Everything. They ascribe pet-hood to their little objects and feel driven by the satisfaction of teaching them their own special tricks. Each and every one of them is an excessively black box. Some go so far as to make social-networks called UML diagrams to protect them from inappropriate "friends".
They have forgotten the elegant abstract world that was left for them by the intellectual giants of history. They descended from it in pursuit of mere performance and are in serious danger of never returning. To act like it's just another way of looking at things is a brutal misunderstanding. It's a discipline that resides entirely within a much larger one that it is not a suitable replacement for. Despite the confusing desperation of non-academics for it to be that. Even its creators are disappointed by its dominance.
Well I can't edit it now. It was mostly a rant anyway. The widely held assumption that OOP is a universal paradigm has done unfathomable damage to Computer Science. It implies nothing more than that every Turing machine is a Turing machine. Then after ignoring every other factor it declares itself supreme based on nothing but its adjacency to the one Turing machine we were already stuck with. I constantly see people fall for this contrivance in various forms. They just can't stand up to it when it's presented alongside the credence of industry.
The author's beef with encapsulation seems to be that when an object A is used as an argument in the constructor to object B, the latter needs to do a deep copy (as keeping a pointer is not "safe"), which is of course not always possible.
I'm at a loss as to what this has to do with encapsulation, and even less able to understand how any language with user-defined data types is going to be able to avoid it.
It is a very abrupt dismissal along the lines of: encapsulation doesn't perfectly encapsulate, therefore it is useless.
It is not perfect because the objects your class may encapsulate may have been passed in a constructor by reference, meaning there is code outside your class that can mutate those objects.
Even in functional programming languages encapsulation is a useful concept - often in a functional language it is implemented with opaque data structures that can only be manipulated by the module which creates them (constructors are not exported).
I don't think it always means there are not shared references; it all depends on the role of the data in question.