Classes often aren't the simplest tool for the job (adamzerner.bearblog.dev)
39 points by adamzerner on Oct 13, 2021 | 59 comments


This is a pretty low-quality post. Of course you can pick on Java for shoving every feature into a class. That says nothing about classes in general.

Printing: Java is verbose because it requires a class. Most languages with classes don't.

The namespacing example in JS is weird because JS already has namespaces, as others have pointed out. But again, this is only a problem in Java, where you might reach for a class just because everything has to be a class.

Functions: same thing.

Objects: not all languages separate objects from class instances, and those that do don't require you to use classes. :shrug:

Object with shared behavior: yes, this is where classes are useful because this is practically the definition of classes.

So the bottom line is use classes for the things they were designed for. Who knew?


I believe that classes, inheritance and OOP as the leading paradigm for two decades were the worst thing to happen to programming after NULL.

You don’t need classes at all, classes are simply a mix of several concepts that are best kept separate: Currying, namespacing, data structs and closures with a bit of syntactic sugar on top. Worst cake ever.

I code in C, Rust, Typescript and Python, and didn’t code any class in the past 5 years, because classes are never the best option. The mix of data and behavior in particular is horrendous, and made much worse by inheritance (now you don’t know which behavior is used without exploring layers of abstractions).

Inheritance was a solution to a problem caused by classes and would never have existed, had other paradigms been dominant. It caused headaches and bugs for generations of programmers (Think of the ORM problems, etc.).

Just code simple, directly serializable data structures, and functions and you’ll be much happier.
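A minimal TypeScript sketch of the style the parent is describing — plain serializable data plus free functions (the `Employee`/`fullName` names are made up for illustration):

```typescript
// Plain, serializable data: no methods, no hidden state.
type Employee = { firstName: string; lastName: string; hourlyRate: number };

// Behavior lives in free functions that take the data as input.
function fullName(e: Employee): string {
  return `${e.firstName} ${e.lastName}`;
}

function weeklyPay(e: Employee, hours: number): number {
  return e.hourlyRate * hours;
}

const alice: Employee = { firstName: "Alice", lastName: "Smith", hourlyRate: 50 };
// fullName(alice) -> "Alice Smith"; JSON.stringify(alice) round-trips cleanly
```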


I think we have two failed concepts in classes: inheritance and encapsulation done wrong.

I don't need to explain why making excessive use of inheritance is wrong. Encapsulation is done wrong because you end up with lots of hidden state and a very complicated dependency graph.


Indeed, one of the pillars of OOP is encapsulation. (= encapsulation of state = hiding state = every object can maybe be stateful, but you can't know from the outside).

Meanwhile, one of the pillars of modern programming techniques (FP, React, and others) is to make state very explicit and separated from the rest (Think State Monad, react's useState, etc), because statefulness is now correctly identified as being a major source of problems.

So OOP took us completely in the wrong direction on state management, for two decades... Understanding that in 2014 was liberating for me.


Classes turn out to conflate two or three different, unrelated concepts. Better languages split those things out so that you can use them separately. (Believe it or not, Java actually advanced best practices in this area: in C++ interfaces were just a "pattern", and many C++ fans were sceptical of the value of making a language-level distinction between interfaces and classes).

When we say "A extends B" we mean a) an instance of A composes in an instance of B b) A implements the interface that B implicitly expresses c) this implementation, by default, delegates to the instance of A from a). Sometimes you really do want to do all three of those things. But often you want to do some subset of them instead. So the language should give you ways to do parts of that without doing all of that, and then "extends" should be at most lightweight syntax sugar over that.
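A rough TypeScript sketch of unbundling "extends" into those three separate ingredients (the `Greeter`/`A`/`B` names are hypothetical):

```typescript
// (b) the interface that B implicitly expresses, made explicit
interface Greeter {
  greet(): string;
}

// B: a concrete implementation
class B implements Greeter {
  greet() { return "hello from B"; }
}

// "class A extends B", unbundled:
class A implements Greeter {
  private inner = new B();   // (a) composition: an A holds a B
  greet() {                  // (c) by default, delegate to the inner B
    return this.inner.greet();
  }
  // A can still add behavior without inheriting anything:
  shout() { return this.greet().toUpperCase(); }
}
```

Each piece can be dropped independently: skip (c) and override `greet` entirely, or skip (b) and keep the B private.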


> Sometimes you really do want to do all three of those things. But often you want to do some subset of them instead. So the language should give you ways to do parts of that without doing all of that, and then "extends" should be at most lightweight syntax sugar over that.

Sounds like you’re describing something like C.

Leverage individual header files to implement a specific behavior rather than features concretized in a spec we don’t always want.

Making everything a class, monoid, etc. feels like the programmer equivalent of making 9 Star Wars movies where each is comprised of lightsaber fights from open to close. That’s a cool still image but … really 9 movies huh?


> Sounds like you’re describing something like C.

Not at all. C has essentially no support for interfaces or polymorphism, let alone delegation - even its data structure support is wonky (no real sum types). If C++ classes are "all you have is a hammer", C doesn't even give you the hammer, so you end up bashing screws in with a rock.

What I advocate is something like Rust, where you have both structs and traits as distinct concepts that serve separate purposes.


> When we say "A extends B" we mean a) an instance of A composes in an instance of B b) A implements the interface that B implicitly expresses c) this implementation, by default, delegates to the instance of A from a).

This is probably the best summary of why traditional OOP is often suboptimal I've read. Clearly OOP solves (or solved) a problem that people were having, it's just that it tries to solve too many problems with the same system.

It's also worth noting that fans of "composition over inheritance" usually don't notice or mention that if you go with pure composition, you lose out on point (c) in most programming languages. Very few languages have a built-in mechanism by which a set of operations on type A can be automatically delegated to one of its elements, without at a minimum, listing all the operations to be delegated. So at least at present, traditional inheritance still provides an operation which is not available with composition.
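In JavaScript/TypeScript you can approximate automatic delegation with a `Proxy`, at the cost of static typing; a hedged sketch (all names hypothetical):

```typescript
class Engine {
  start() { return "vroom"; }
  stop() { return "silence"; }
}

// Manual composition: every delegated operation must be listed by hand.
class CarManual {
  private engine = new Engine();
  start() { return this.engine.start(); }   // boilerplate
  stop() { return this.engine.stop(); }     // boilerplate
}

// A Proxy can forward unknown members to the composed element automatically,
// recovering point (c) without inheritance.
function delegateTo<T extends object>(target: T, inner: object): T {
  return new Proxy(target, {
    get(t, prop) {
      if (prop in t) return (t as any)[prop];
      const v = (inner as any)[prop];
      return typeof v === "function" ? v.bind(inner) : v;
    },
  });
}

const car = delegateTo({ honk: () => "beep" } as any, new Engine());
// car.honk() -> "beep" (own method); car.start() -> "vroom" (delegated)
```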


We're starting to see languages have support for delegation - Kotlin has it and Rust was at least talking about it. I agree it's still pretty rare though.


I am confused. Is there something you disagree with? Or are you saying that the points are obvious and thus not worth making?

(FWIW, I wasn't too happy with the writing quality when I published it. I think it is fine, but not great. I spent some time trying to improve it, but the words weren't quite coming to me. I considered throwing the post away, but decided not to. I think that the central point (as in DH6 from How to Disagree[1]) is a good one, and even though I wasn't able to express it as clearly as I'd like, it is still worth publishing as more of a conversation-starter type of post, not as the official/reference post on the topic. In retrospect it would have been good to express this in an epistemic status section[2].)

[1] http://paulgraham.com/disagree.html [2] https://www.lesswrong.com/posts/Hrm59GdN2yDPWbtrd/feature-id...


Sorry, I didn't realize the author submitted the piece, so I apologize if my comment came across as harsh.

I think the issue for me is that until you get to objects you're listing things that classes are not designed for and in most languages wouldn't be used for anyway.

Each point sounds like:

* Don't use classes for functions, use functions for functions.

* Don't use classes for top-level statements, use statements for top-level statements.

Which, while true, was only ever a thing to begin with in Java, which made a pretty weird design decision that forces your hand. So the whole piece reads as a criticism of Java mislabeled as a criticism of classes in general. Yes, Java's weird, but that's Java's problem, not OOP's.

The part of this theme of OOP criticism that irks me is that there's a lot of irrational class hate out there, and writing that classes aren't the best option for things classes aren't intended for just adds to it. Yes, classes are not replacements for all functions; they never were.


No worries. And I gotcha, your comments make more sense to me now.

What I was going for in giving the examples was just to build up from the simplest to stuff that is more complex. Like: "Here they aren't the simplest tool for the job. What about here? No. What about here? No. What about here? Not quite. What about here? Yes!"

It felt like a solid way of making the point. I didn't mean to imply that I think it is common for people to use classes for eg. print statements. I tried to protect against this interpretation in the last section:

> I want to keep the scope of this post narrow. I don't want to start getting into the weeds about the pros and cons of using classes vs using alternatives to classes. My goal here is just to point out that simpler alternatives often do exist. Hopefully that perspective can help better inform the decisions you make when writing code, and the opinions you form about programming languages.

---

> The part of this theme of OOP criticism that irks me is that there's a lot of irrational class hate out there, and writing that classes aren't the best option for things classes aren't intended for just adds to it. Yes, classes are not replacements for all functions; they never were.

I hear ya. I think that like most things, there are people at the extremes of both sides of the spectrum saying stupid stuff. On one end you've got the people who love classes and think eg. that it's ok to not have first class functions. And then on the other end you've got people who hate classes and think eg. that there is basically no good use case for them ever (I came across this a decent amount actually as I was googling around before writing this post). It sounds like you and I are both somewhere in the middle and agree about the broad strokes.

As an example, of the "love classes" extreme, there is a particular pattern I've been forced to use before that I hate. In a ruby codebase, instead of creating a utility function, we have to create a class with an instance method named `call`, instantiate the class with the arguments you intend to use for the method, and then call the method with no arguments. So like instead of `getFullName(first, last)` you'd have to do `NameService.new(first, last).call`.
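For comparison, here is roughly what that pattern and its plain-function equivalent look like in TypeScript (names taken from the example above):

```typescript
// The ceremony-heavy version: a one-method class whose constructor
// captures the arguments, just so `call` can be invoked with none.
class NameService {
  constructor(private first: string, private last: string) {}
  call(): string {
    return `${this.first} ${this.last}`;
  }
}

// The plain-function version does the same job with no ceremony.
function getFullName(first: string, last: string): string {
  return `${first} ${last}`;
}

new NameService("Ada", "Lovelace").call(); // "Ada Lovelace"
getFullName("Ada", "Lovelace");            // "Ada Lovelace"
```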


But many other languages are like this. AFAIK, C# is the same - there are no first-class functions allowed, only classes.


In practice, the difference is pretty small in C#. For almost all purposes classes act like a namespace (System.Math) and can be imported like one. Newer versions add even more syntactic sugar, so that methods can effectively be defined outside classes.


You can import static methods in Java too. Works the same.


In C# I either use classes as namespaces or use them as data structures. The `using` and `static` keywords help a lot.

I write lots of C# but I try to use as little of OOP as possible.


The JavaScript namespacing example is wonky:

    export default {
      email: "alice@example.com",
      login: function () { ... }
    };

By writing `export`, you're showing you're using modules; modules are namespacing, and this file (potentially named currentUser.js) could contain this instead:

    export let email = "alice@example.com";
    export function setEmail(value) { email = value; }
    export function login() { ... }

That's pure namespacing. And yes, export bindings are live, so consumers see updates - though imported bindings are read-only, so reassignment has to happen inside the module, e.g. via an exported setter (you're likely to surprise some people, but then, the whole style of the code was dubious in many languages anyway):

    import { email, setEmail, login } from "./currentUser.js";
    setEmail("bob@example.com");
    login(); // sees the updated email

Back to the original example: speaking generally, `export default { … }` is a code smell that should normally be replaced with precise named exports, which generally make life easier for tooling and consequently for humans. (As a consumer, the precise-exports version requires only changing `import currentUser from "./currentUser.js"` to `import * as currentUser from "./currentUser.js"`.) Seeing such abuse of the default export so often, I have mused whether it was perhaps a misfeature, an injudicious appeasement and compatibility measure for older patterns.


Default export itself isn’t the misfeature. Mixed named/default exports is. There are several use cases/patterns where a single export is common (though because of the same concerns you raise, and others, I tend to use a single named export anyway).


Ah, you are right. I didn't realize this when I wrote it. Thanks for pointing it out.


As a professional dev who has made a career out of working in oo languages and codebases, I agree, and it took me far too long to realize that when it comes to oo, the emperor has no clothes.

To this day, oo advocates can't even agree on what oo even is or means.

Apparently oo as envisioned by Alan Kay was supposed to work like cells in the body that pass messages between each other and take actions independently.

Why? Who knows! It was never really explained why literally one of the most complex systems imaginable, one that we still really have very little idea how it even works, should be the model for what could and should probably be a lot simpler.

Today's modern oo languages are probably very far from what Kay envisioned (whatever that was), but it remains unclear why classes and objects are "better" than the alternatives.

And before anyone goes and comments aksully code organization blabla like yes but code organization can be great or shit in oo or fp or procedural codebases, it has nothing to do with the "paradigm".

Let alone that the entrenched, canonical, idiomatic coding styles of most modern oo languages encourage state, mutability, nulls, exceptions and god knows how many trivially preventable entire classes of errors. Granted, most have now started to come around and are adopting more fp features and ideas every year, but still.


Kay's ideas are about building complex systems from isolated components ("cells") which may evolve independently. The web is an example: there's a server and a page and a browser and the user's extensions, all changing without reference to each other. How can they all work together?

Answer: keep them isolated (no direct function calls), defer most choices to runtime, make it easy to interrogate each other's interfaces. I think that sums up OO. "Call this function if it exists at runtime" is trivial in JS, but a crisis in Haskell.

You may not need that stuff. If you're statically compiling your code and all dependencies, it seems pointless. But if you're building a system which talks to code not under your control, code you cannot just refactor at will, then I don't know of any better techniques than OO.
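A small TypeScript sketch of that "call it if it exists at runtime" style, using optional chaining (the extension objects are invented for illustration):

```typescript
// Components that evolved independently: some expose `onThemeChange`, some don't.
const oldExtension = { name: "legacy" };
const newExtension = {
  name: "modern",
  onThemeChange: (theme: string) => `switched to ${theme}`,
};

// "Call this function if it exists at runtime"; otherwise fall back gracefully.
function notify(ext: { name: string; onThemeChange?: (t: string) => string }) {
  return ext.onThemeChange?.("dark") ?? `${ext.name}: not supported, skipped`;
}

notify(newExtension); // "switched to dark"
notify(oldExtension); // "legacy: not supported, skipped"
```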


Really depends on what you're talking about. For example, you can have a `Maybe (IO ())` value, which would mean "run this for its side effects if it is present, otherwise do nothing". You could also model it as e.g.

    foo :: [(Text, IO ())]

so this would be a list of capabilities, each with a name.

Of course this is a bit tricky; you would have to do a lot of heavy lifting to pull it off in C, but it wouldn't be hard to do in e.g. JS at the FFI layer before Haskell.


I generally agree but in case it helps anyone:

> Apparently oo as envisioned by Alan Kay was supposed to work like cells in the body that pass messages between each other and take actions independently.

> Why? Who knows!

A very probable good mental model for what was intended is Erlang/OTP or the Actor Model generally. The idea of encapsulated systems which interface opaquely is very well represented there. And the reasoning and benefits are as well.


Don't get me wrong, it's an interesting idea which deserves pursuing, if nothing else but to satisfy our curiosity and seeing to what if anything it's applicable and suited. (and even if the answer turns out to be "nothing", we've still learned something!)

But going from there to making strong claims about it being a more or less universally superior paradigm for computing and writing code, with little to zero evidence, that's a huge, huge stretch.

To the degree Erlang and Actors work, I think that's kind of a happy coincidence, and not due to any rigorous work on Alan Kay's part.


You may want to spend time with Joe Armstrong’s take on the matter. If nothing else it’ll be interesting.

And the elegance of something may or may not speak to rigor, but that doesn’t mean rigor doesn’t bear it out. It’s not as if Erlang is some flash in the pan, it’s been an incredibly important part of large scale real world computing.


In my first year I honestly thought Java worked like Erlang (which I didn't know of at the time): objects being independent parallel threads sending messages. Then the teacher laughed at the idea.


Clean architecture is worth a read. I like his description.

"Object-Oriented programming imposes discipline on indirect transfer of control."


I've read all of Uncle Bob's books.

IMO, Robert C Martin is to Computer Science what Nassim Taleb is to Economics and Finance, and what Jared Diamond and Graham Hancock are to anthropology and archeology - they're pseudo-intellectual quacks pushing a bunch of impressive and authoritative-sounding absolute bs, with zero evidence behind it, on unsuspecting victims.

A lot of Robert C Martins pieces are just variations on his strong belief that ill-defined concepts like "craftsmanship" and "clean code" (which are basically just whatever his opinions are on any given day) is how to reduce defects and increase quality, not built-in safety and better tools, and if you think built-in safety and better tools are desirable, you're not a Real Programmer (tm).

I'm not the only one who is skeptical of this toxic, holier-than-thou and dangerous attitude.

Removing braces from if statements is a great example of another dangerous thing he advocates for no justifiable reason:

https://softwareengineering.stackexchange.com/questions/3202...

Which caused the big OSX/iOS SSL bug in 2014, see https://www.imperialviolet.org/2014/02/22/applebug.html

This link and thread on hackernews is good too

https://news.ycombinator.com/item?id=15440848

    The current state of software safety discussion resembles the state of medical safety discussion 2, 3 decades ago (yeah, software is really really behind time).

    Back then, too, the thoughts on medical safety also were divided into 2 schools: the professionalism and the process oriented. The former school argues more or less what Uncle Bob argues: blame the damned and * who made the mistakes; be more careful, damn it.

    But of course, that stupidity fell out of favor. After all, when mistakes kill, people are serious about it. After a while, serious people realize that blaming and clamoring for care backfires big time. That's when they applied, you know, science and statistic to safety.

    So, tools are upgraded: better color coded medicine boxes, for example, or checklists in surgery. But it's more. They figured out what trainings and processes provide high impacts and do them rigorously. Nurses are taught (I am not kidding you) how to question doctors when weird things happen; identity verification (ever notice why nurses ask your birthday like a thousand times a day?) got extremely serious; etc.

    My take: give it a few more years, and software, too, probably will follow the same path. We needs more data, though.


Some of the biggest messes I've had to clean up in my career were created by inexperienced people blindly following Uncle Bob's principles.

Pragmatism should come before all else in software IMO.


I'd be keen to hear more of this story if you've got the time to share?


That bug would be possible with braces as well. That copy paste error will not automatically disappear by using braces.


I agree with your comments, but interestingly Taleb applied to SWE would be on the polar opposite of Clean Architecture. As the comment you quoted: the "data" comes from practice and caring, the "Clean Architecture" is the pseudo-science that derived from theory instead of practice. Another viewpoint is DHH's keynote where he aligns most of web development to writing (Information System) instead of trying to derive hard rules from theory.


I think where these decisions get hazy for me, especially in a language like Javascript, is that the language is chock-full of enough features that I often find the class/function/object decision to be somewhat negligible when the purpose is as trivial as the examples provided.

You can make those factory functions nearly as concise as the class examples if you wanted:

    const createEmployee = (firstName, lastName) => ({ firstName, lastName })
    const createContractor = (firstName, lastName) => ({
      ...createEmployee(firstName, lastName),
      // overridden properties here
    })
    const createFullTimeEmployee = (firstName, lastName) => ({
      ...createEmployee(firstName, lastName),
      // overridden properties here
    })
I've always thought more in terms of consumption - who will be using this code and how will it be used? As a dutiful Typescript worker bee, my probably unscientific approach has been:

- Try to just use a function, especially when your output will only be consumed one way

- If the domain of the logic that you are working must encapsulate multiple ways to be consumed and there isn't a clear primary use case, and much of the logic and state should be shared between functions, consider a class

I don't find the end result to be significant for anything other than code cleanliness/readability. Admittedly I've never really had language features themselves be performance bottlenecks doing mostly frontend web work. This may be too specific of an observation of Javascript itself to be a broad opinion though. I'd love to hear someone else's opinion as my experience is not from a compsci background.


> I don't find the end result to be significant for anything other than code cleanliness/readability.

Agreed, I think. But isn't readability a very important goal? "Programs must be written for people to read, and only incidentally for machines to execute."

> I often find the decision to make class/function/object to be somewhat negligible when the purpose is as trivial as the examples provided.

Yeah I agree with that. The reason I chose trivial examples was just to make it easier to understand.


A comment in the Stack Overflow post that the OP links to led me to https://en.wikipedia.org/wiki/Liskov_substitution_principle which seems to be far more useful a concept to know than the OP:

> Substitutability is a principle in object-oriented programming stating that, in a computer program, if S is a subtype of T, then objects of type T may be replaced with objects of type S (i.e., an object of type T may be substituted with any object of a subtype S) without altering any of the desirable properties of the program (correctness, task performed, etc.).

If your Ugly Duckling doesn't quack like a duck, or is only raised by a duck but is not one itself (a has-a relationship, ergo composition!)... it probably shouldn't be a subclass of Duck. But if it does those things, and you gain more than you lose from the simplicity of passing around simple objects (okay, the analogy has completely frayed here), it should!
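A quick TypeScript sketch of that distinction, with invented names: the is-a case subclasses, the raised-by case composes:

```typescript
class Duck {
  quack() { return "quack"; }
}

// A Mallard is-a Duck: fully substitutable, so subclassing is fine.
class Mallard extends Duck {}

// The Ugly Duckling is raised BY a duck but isn't one (has-a relationship),
// so model it with composition instead of inheritance.
class UglyDuckling {
  constructor(private adoptiveParent: Duck) {}
  cry() { return "honk"; } // it's a swan: different behavior
  parentQuacks() { return this.adoptiveParent.quack(); }
}

function makeItQuack(d: Duck) { return d.quack(); }
makeItQuack(new Mallard()); // fine: behaves exactly like a Duck
// makeItQuack(new UglyDuckling(new Duck())) is a type error - and rightly so
```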


The interview[0] with Barbara Liskov about the principle is also really interesting.

> Although it may seem a little strange these days, in those days it was a long way from the east coast to the west coast, and of course we had no conference calls in those days too. That whole business about object-oriented programming was developing on the west coast and on the east coast we were mostly working on data abstraction, and the two worlds were kind of separated. So I knew the name, but we didn’t run into each other at conferences and there wasn’t much crosstalk going on.

> In the 1980s, I was asked to give a keynote at OOPSLA, which I think it was maybe the second OOPSLA. It hadn’t been in existence very long. So I decided that this was a good opportunity to learn about what was going on in object-oriented languages. So I started reading all the papers and I discovered that hierarchy was being used for two different purposes. One was simply inheritance. So I have a class, it implements something, I can build a subclass, I can borrow all that implementation, change it however I want, add a few extra methods, change the representation. Whatever I want to do, I just sort of borrow the code and keep working on it.

> The other way it was being used was for type hierarchy. So the idea was that the superclass would define a supertype, and then a subclass would extend this to become a subtype. I thought this idea of type hierarchy was very interesting, but I also felt that they didn’t understand it very well. I remember reading papers in which it was clear they were very confused about it, because one in particular that I remember said that a stack and a queue were both subtypes of one another. This is clearly not true because if you wrote a program that expected a stack and you got a queue instead, you would be very surprised by its behavior. The difference between LIFO and FIFO is a big deal.

> This led me to start thinking about “What does it really mean to have a supertype and subtype?” And I came up with a rule, an informal rule which I presented in my keynote at OOPSLA which simply said that a subtype should behave like a supertype as far as you can tell by using the supertype methods. So it wasn’t that it couldn’t behave differently. It’s just that as long as you limited your interaction with its objects to the supertype methods, you would get the behavior you expected.

[...]

> Meanwhile I was working on distributed computing, I was particularly interested in viewstamped replication and some of the other work that was going on in my group at the time, and I wasn’t really thinking about this until sometime in the ’90s when I got an email from someone who said, “Can you tell me if this is the correct meaning of the Liskov substitution principle?” So that was the first time I had any idea [laughs] that there was such a thing, that this name had developed.

[0] https://amturing.acm.org/pdf/LiskovTuringTranscript.pdf page 29
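Liskov's stack/queue point can be sketched in TypeScript: the two types offer the same method names but observably different behavior, so neither is substitutable for the other (this is an illustration, not her original example code):

```typescript
class Stack<T> {
  private items: T[] = [];
  add(x: T) { this.items.push(x); }
  remove(): T | undefined { return this.items.pop(); }    // LIFO
}

// A queue exposes the same method names, but NOT the same behavior:
class Queue<T> {
  private items: T[] = [];
  add(x: T) { this.items.push(x); }
  remove(): T | undefined { return this.items.shift(); }  // FIFO
}

// A caller written against Stack is "surprised" by a Queue:
function addTwoRemoveOne(s: { add(x: number): void; remove(): number | undefined }) {
  s.add(1);
  s.add(2);
  return s.remove();
}

addTwoRemoveOne(new Stack<number>()); // 2 (LIFO)
addTwoRemoveOne(new Queue<number>()); // 1 (FIFO) - not substitutable
```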


Classes are nice when used in conjunction with a framework. Most developers realize that the Car that extends Vehicle from CS 101 is a limited pattern that isn’t very useful. And OO languages often force developers to declare classes when they really need namespaces or first class functions.

But if you are working with something like Spring Beans or React Components, classes serve as a decent unit within a framework. They help guide developers to factor their code in blocks of roughly similar size and scope. They help organize a codebase into something explorable for new members. Like chapters in a book, they don’t have to all be the same size, and may vary between books/codebases. That being said, most languages overload classes to be record/objects and namespace modules.

I like functions, and generally agree with the post. But when all you have is functions calling functions it’s not much better than spaghetti in large projects. You can still make a mess with classes, but they give a conventional way to delineate layers.


Just because Java implements object oriented programming in a rigid and clunky way doesn't make your Java anecdote a general principle. This isn't a very good blogpost.

But I'm glad HN is back to arguing about petty programming stuff nonetheless, rather than partisan politics.


This article could really use an example of composition at the end to feel complete.


I agree! I actually wanted to include one, but as I started to explain it it started getting too wordy, which made the post feel like it was more about composition vs inheritance than classes vs alternatives, thus distracting from the main point.

I think this is mostly a limitation of me as a writer though rather than being an inherent thing. Ie. I think there's gotta be a way to give that example in a concise enough way where it doesn't distract from the main point.


Hm. I think your post would be a bit more compelling if you skipped examples of verbosity that are due to Java being shitty. JS would be simpler for the point? [nevermind the person who said that it's a low-quality post; it's just imperfect, not low-quality!]

The compositional method of this is for each employee to have an `EmploymentStatus` member which contains information relevant to their employment status -- so `ContractorStatus` would contain hourly rate as a field; `FullTimeStatus` would contain salary as a field, etc.

The class-based version of this puts the same attributes on the classes directly. Without additional fields that vary depending on the type, it doesn't make a lot of sense as an example. The `generatePayCheck` function doesn't demonstrate this, because it could just as easily be a standalone function that switches on the class or on an `employeeType: Contractor | PartTime | FullTime` field (and in languages that are less janky, this can be a discriminated union that includes additional fields in the type. I guess you can do that with Java via interfaces as well but it's not gonna be fun).

A function that switches on the type, or an instance method on varying classes, or a discriminated union, or a composed attribute -- these are all isomorphic... up until you find yourself needing another type of variation at the same time. Perhaps you're modeling a university workforce and people are also either `Administration | Educator | Service` employees. Now you can't have 3x3=9 total classes, so you _need_ composition (for at least one of the type hierarchies, possibly both).

The best place to use class hierarchies is when there is one 'canonical' inheritance hierarchy, rather than a bunch of ones that should be ideally compositional. What classes give you that unions don't is built-in methods, but you need those methods to be relevant to every type in the class hierarchy, so they really need to be intrinsic to the type of object you're talking about. For employees, `getPaycheck()` is probably a reasonable example of an actual class method for this reason.
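A possible TypeScript sketch of the composed version, using a discriminated union for one axis and a plain field for the other (names follow the comment above but are otherwise invented):

```typescript
// One axis of variation as a discriminated union...
type EmploymentStatus =
  | { kind: "contractor"; hourlyRate: number }
  | { kind: "fullTime"; salary: number };

// ...and the other as a plain field, instead of 3x3 = 9 subclasses.
type Role = "administration" | "educator" | "service";

interface Employee {
  name: string;
  role: Role;
  status: EmploymentStatus;
}

function monthlyPay(e: Employee, hoursWorked: number): number {
  switch (e.status.kind) {
    case "contractor": return e.status.hourlyRate * hoursWorked;
    case "fullTime":   return e.status.salary / 12;
  }
}
```

Adding a third employment kind or a fourth role is one line each, with no combinatorial class explosion.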


This is judging a fish on its ability to fly. In other words, it is judging an object-oriented program on its ability to resemble top-to-bottom sequential-execution-style programming, while ignoring that the language is committed to a different paradigm.

Some of the criticism is really superficial:

    function createEmployee(firstName, lastName) {
        return {
            firstName: firstName,
            lastName: lastName,
        };
    }

OO programs have these weird factory functions called constructors that do exactly that. To me, it makes sense that such functions should share their living space with the class that they are creating. Where else would you like to put these functions? "Utils"?

As for classes, namespaces and objects: as others have pointed out, this is an implementation detail. In Python, for example, classes are objects. Besides, classes and objects can be encoded as functions (and the other way around), so it doesn't matter all that much in practice.

What bugs me more is code like this:

    class SayHello {
        public static void toPerson(String name) {
            System.out.println("Hello " + name);
        }
    }

Sure, compared to a Python file which just says "print", this looks like an awful lot of code. However, if the paradigm is poorly understood, it will seem to be working against you:

    class Name {
        private final String name;

        public Name(String name) {
            this.name = name;
        }

        public void greet() {
            System.out.println("Hello " + this.name);
        }
    }

The programming is different in that you create a declarative network of objects that then ... act. At this level, there is hardly any difference, except that the name has been introduced as a proper concept. Many people will argue that such value objects are expensive or inefficient (which they are) but if that cost hits you hard, you should be asking yourself why you picked a high level language to begin with.

Yes, in such a paradigm a place is needed where the entry point lives, but I've found it fundamentally useful. Whatever you call the class, it:

- can read and encapsulate access to env variables;

- can accept command-line arguments (as main(argv[]) indicates);

- is the place to set up and configure your object dependency tree (what IoC/Dependency Injection containers do);

- can be the ultimate exception handler for failures deeper in your program, e.g. where you would start your main message pump / game loop, etc.
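A minimal sketch of such an entry-point class (all names invented): it reads the environment, accepts arguments, wires the tiny object tree, and sits as the last-resort exception handler.

```java
class App {
    public static void main(String[] args) {
        // read CLI args, falling back to an (assumed) env variable
        String name = args.length > 0
                ? args[0]
                : System.getenv().getOrDefault("GREET_NAME", "world");

        Greeter greeter = new Greeter(name); // wire the dependency tree
        try {
            greeter.greet(); // hand control to the object network
        } catch (RuntimeException e) {
            System.err.println("fatal: " + e.getMessage()); // ultimate handler
        }
    }
}

class Greeter {
    private final String name;

    Greeter(String name) { this.name = name; }

    String message() { return "Hello " + name; }

    void greet() { System.out.println(message()); }
}
```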

The rest is ergonomics.


I hate how Java and C# force every function to be a method (i.e., a member of a class).


I’d go as far as saying classes are never the simplest tool for the job.

But maybe the past 10 years of functional-ish programming have broken me. There are lots of problem areas where objects fit nicely, though. Actor-based programming is very elegant with classes and objects, I think.


Sadly C++/Java/C# have given classes a bad name. I think a lot of that is because in those languages classes are pretty much the only tool in the toolbox. After a while you are going to really hate that tool, no matter what it is. Also, all those languages are a pretty poor implementation of classes and OO.

Modern multi-paradigm languages are great because you can use classes when classes are the right thing, and functional programming when that's the right thing.

Classes are a great tool, but not the only tool.


> Modern multi-paradigm languages are great because you can use classes when classes are the right thing, and functional programming when that's the right thing.

There doesn't seem to be an either/or situation when using CLOS in Common Lisp.


The two biggest problems with OO as it's traditionally used are:

1. Implementation inheritance creates brittle, tightly coupled and difficult to test code.

2. Mutable classes force you to structure data updates in a particular, inflexible way. In a world where you're sharing data across multiple clients, serializing and deserializing, using things like Redux to structure updates etc this creates a huge amount of internal friction.
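The alternative the parent comment points at can be sketched as an immutable value plus a pure "update" that returns a copy (a Java record, which needs Java 16+; the type is invented):

```java
// Immutable data: updates produce new values, the original never changes,
// which keeps serialization and sharing across clients straightforward.
record Point(int x, int y) {
    Point withX(int newX) {
        return new Point(newX, y); // copy, don't mutate
    }
}
```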


There are things like static classes and functions that can co-exist.


Isn’t a static class essentially just a namespace?


Static classes still have constructors that get invoked to create the static instance. Namespaces don't have constructors; instead, each member of a namespace gets constructed separately.
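In Java terms the nearest equivalent is a static initializer block, which runs exactly once when the class is first loaded; a plain namespace has no such hook. (Class and keys invented for illustration.)

```java
import java.util.HashMap;
import java.util.Map;

class Config {
    static final Map<String, String> DEFAULTS = new HashMap<>();

    static {
        // one-time setup at class initialization
        DEFAULTS.put("greeting", "Hello");
    }
}
```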


I guess it depends on the language but I've definitely had problems with codebases that depended heavily on static initializers. IMO it's usually better to explicitly bootstrap your code.


No, most developers aren't fully aware of how static initialization works, so relying on it usually isn't a good idea even if you understand it yourself. But it is a difference between static classes and namespaces.


I agree with that.

My class-based solutions are more prone to failure than other solutions I've used before. It always gets more complex than it should.


But they are great for interviews with whiteboard coding questions, which are great for getting a job nowadays!


How so? Classes are extremely verbose IMO, which is the last thing I'd want in a whiteboard interview.


As someone who occasionally watches competitive programming for fun (and competitive programming is basically competitive whiteboard interviewing) it gives me quite a bit of perverse pleasure that even in languages with great first-class class and object support the norm is to write functions that operate on global variables.


Where do you go to watch competitive programming? That sounds like it might help with Leetcode-style bullshit.


I just follow a couple YouTubers who make videos about it. For example:

William Lin - https://www.youtube.com/channel/UCKuDLsO0Wwef53qdHPjbU2Q

Errichto - https://www.youtube.com/channel/UCBr_Fu6q9iHYQCh13jmpbrg


I only ever use classes to avoid global variables (mostly when using matplotlib animation)

And even then I usually try to use a function returning a function (a closure) whenever possible.

I find classes really hard to grasp (multiple inheritance/the syntax often sucks/diamond gotchas). Maybe I just don't understand them, but I've been able to get away without understanding classes for long enough
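The "function of a function" trick, in Java terms (example invented, not from the thread): local state captured by a returned function avoids both the global variable and a hand-rolled class at the call site.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntSupplier;

class Counters {
    // Returns a closure over a local counter -- no global, no visible class.
    static IntSupplier makeCounter() {
        AtomicInteger count = new AtomicInteger(); // lives inside the closure
        return count::incrementAndGet;
    }
}
```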



