

It seems like a less sophisticated version of the reactive paradigm to me. Am I wrong?


I think you are right. If the response were a promise or similar, it would become a reactive system.
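
For example (a minimal Kotlin sketch with made-up names, using CompletableFuture as the "promise"): once the handler returns a future instead of a plain value, callers stop waiting for the result and instead attach reactions that run when the value arrives.

```kotlin
import java.util.concurrent.CompletableFuture

// Hypothetical blocking handler: the caller just waits for the value.
fun fetchUserBlocking(id: Long): String = loadFromDb(id)

// Promise-style handler: the caller registers reactions that run when the
// value arrives, and further transformations compose into a small pipeline.
fun fetchUserAsync(id: Long): CompletableFuture<String> =
    CompletableFuture.supplyAsync { loadFromDb(id) }

// Stand-in for a real data source, used only for illustration.
fun loadFromDb(id: Long): String = "user-$id"

fun main() {
    fetchUserAsync(42)
        .thenApply { it.uppercase() }  // react to the value once it exists
        .thenAccept { println(it) }    // then react to the transformed value
        .join()                        // block here only to keep the demo alive
}
```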

Disclaimer: I've avoided reactive approaches as much as I can on the server side, as they increase complexity (and cognitive load) and make diagnosing issues harder. This is my experience in the JVM world; I'm not sure how it is in other languages/platforms.

[This article](https://netflixtechblog.com/zuul-2-the-netflix-journey-to-as...) from Netflix mentions the tradeoffs they experienced when they re-engineered their API gateway to be reactive. I wonder if this is still valid as the article was written 6 years ago.


Reactive.js is hellish too. Elm (basically what the article describes) dropped it as well in favor of this simpler approach, while keeping some of the interesting parts.


I guess it depends what exactly you mean by "reactive paradigm".

In my experience, the commonality between client-side mobile applications and reactive principles is that essentially everything is a stream, and the UI subscribes to those streams.

The approach outlined in the post is similar in that the UI observes a (single) stream of data, which can just be a simple (State) -> Unit as opposed to some reactive library stream primitive.
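
To make that concrete, here is a minimal Kotlin sketch (the names are hypothetical, not from the post): the "stream" is just a store that pushes each new State into a single (State) -> Unit function.

```kotlin
// The UI subscribes with a plain (State) -> Unit function instead of a
// reactive library's stream type.
data class State(val items: List<String> = emptyList(), val loading: Boolean = false)

class Store(initial: State = State()) {
    private var state = initial
    private var render: (State) -> Unit = {}      // the single observer

    fun subscribe(observer: (State) -> Unit) {
        render = observer
        render(state)                             // emit the current state immediately
    }

    fun dispatch(update: (State) -> State) {
        state = update(state)
        render(state)                             // notify the observer of the new state
    }
}

fun main() {
    val store = Store()
    store.subscribe { println("render: $it") }    // no Flow/Observable involved
    store.dispatch { it.copy(loading = true) }
    store.dispatch { it.copy(items = listOf("a", "b"), loading = false) }
}
```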

The approach outlined is different from all the "reactive" client-side applications I have seen, as there is no proliferation of stream primitives throughout the codebase, which often seems to create complex code with very little, if any, user benefit.

I'm not sure what other elements you see in the post which also fall into the "reactive paradigm"?


Most of the UI/UX designers that I see at work and other places are basically graphic designers. I just wish UI designers would learn more about interaction design than just pure graphic design.


Just to add to this (copied from another branch):

The challenge the profession currently faces is that a lot of people go into it for the wrong reasons and are not suited for it. Because there is no money in traditional graphic design, many graphic designers elect UX thinking it's nearly a 1:1 transfer. There is a lot of mismatch.

You really have to like, and have a good sense of, human cognition and human factors first and foremost. You also have to like thinking in systems. You are basically designing (engineering) solutions for how humans interact with computing in all its forms and in many modes.

Many designers, whether they admit it to themselves or to others, would really rather be designing book covers and concert posters.


And the people who ARE interested in those don't get as far, because we don't have as many shiny things to show off to HR to get an interview in the first place.


I've hired a number of junior designers based almost solely on 'made up' projects and school projects. I know many others who have too. If you're good, or at least show potential, at the junior designer level it doesn't matter.


I took a few elective Human Computer Interaction classes in undergrad and grad school. I have never seen a company pay more than lip service to doing actual UI design.

I generally work with graphic designers who made the switch to calling themselves UI/UX designers because they realized it paid more. They're much better than me at understanding composition, fonts, colors, visual hierarchy, etc., but they don't know much at all about actually designing interactive UI.

In many cases the designers are the first people to work on engineering the product and they aren’t equipped for this work at all.


Conversely, many of the developers I came across in design firms were cargo cult coders who got jobs because they very confidently presented their mediocre-at-best development capabilities as plausible to non-technical people. In fact, they likely didn't even know they weren't great, because they generally completed the simple tasks they were given and the people evaluating their work knew less than they did. Not knowing enough to hire and direct qualified staff is a management problem, not a problem with the field.

TBH, the overwhelming majority of developers assume they know a lot more about UI, UX and visual design work than they actually do... I'm continually shocked by how many think bringing in a UX person to add visual polish to an otherwise complete product in any way makes sense. In software dev environments, they necessarily have a lot of say in who gets hired. If you're not clear on who does what in the design business, you get people with shiny portfolios of app UI screens assembled in Photoshop to do your "UX" if there's enough time and budget left for the "extra" sprint.


Thanks for saying what needed to be said and addressing the elephant in the room. I often see this arrogance in boot camp coders.


Unfortunately, most staffing departments seem to think that this is what UX is. Often, where you'd have seen "web designer" or "graphic designer" before, the role has been renamed "UX designer", and the emphasis is really on visual appeal or the ability to toss together web pages, rather than on the skills of information architecture, usability flows, user research, and so on. For some people there's an overlap in these skills, but not for all.

My wife is trained in UX research and UX/UI design and is trying to break back into the job market after years of being out (kids, sick mother, school, etc.). Although she had a background in some graphic design (and marketing and content) years ago, she doesn't feel confident in, or want to emphasise, graphic design, and doesn't have an up-to-date portfolio of that kind of thing. And what she is finding is that almost all the positions titled "UX Designer" are really just "web designer" or "graphic designer" with a fancy new title, and they won't look at her, really.

The few times I've done front-end work, I've found it frustrating how the UX people I worked with seemed more concerned right up front with pixel padding, font choices, colours, animations, and logos than with getting the initial storyboard and low-fidelity mockups right first.

TL;DR: most shops hiring a "UX designer" really want to hire graphic designers and pixel slingers.

P.S. if you know of anybody hiring (remote, full or part time or freelance) for UX research, content design, information architecture, and so on and who wants a mature and conscientious worker with past professional experience in the tech sector... ping me at ryan . daum @ gmail.com


The same applies to web design. Using Photoshop to create an image that looks like a website doesn't make someone a web designer, any more than creating an image that looks like a house would make them an architect.


> Most of the UI/UX designers that I see at work and other places are basically graphics designers.

What I think is worse is that even when they have a formal background in UX, the company never wants to utilize it. Every time it's the same junk process. Instead of deriving features from user journeys, empathy maps, and personas, the company has the "UX" person generate documents that validate the feature list the customer already wanted, users be damned. You can also forget about A/B testing or in-person testing; the person at the top with a C in their title knows what their users "need", so there is no need to pay for testing.


It's funny, because I went to school for computer science, grew up doing graphic design, and have done a lot of work in both departments, but never in a strict UI/UX role. When I applied for UI/UX positions, I either never heard back or was told time and time again that I was lacking specific UI/UX experience.

I always thought it was funny; I guess hiring managers don't see the overlap the same way I do.


I run a boring website, and I care a great deal about UX and accessibility.

It's nearly impossible to find good UX resources for boring websites. It's all "30 clothing brand websites with cool hamburger menus".

If you're building a knowledge base, you're pretty much on your own. There's only the UK government's blog, nngroup.com and a few others.


I think this article is more about an alternative rasterization algorithm for 2D geometry. "Vector graphics" is misleading, as the vertices used in graphics APIs are also vectors.


> creation of a Bézier curve: ... clearly a CPU algorithm.

Isn't this basically just tessellation? GPU-based tessellation is very common, mostly for meshes, but it can be used for line-like figures too.
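
By tessellation I mean roughly this kind of flattening (a minimal Kotlin sketch with made-up names; real renderers typically choose the segment count adaptively from curvature and zoom, and a GPU version would do the same sampling in a shader):

```kotlin
// Flatten a cubic Bézier into a polyline by uniformly sampling the parameter t.
data class Point(val x: Double, val y: Double)

fun cubicBezier(p0: Point, p1: Point, p2: Point, p3: Point, t: Double): Point {
    val u = 1.0 - t
    return Point(
        u * u * u * p0.x + 3 * u * u * t * p1.x + 3 * u * t * t * p2.x + t * t * t * p3.x,
        u * u * u * p0.y + 3 * u * u * t * p1.y + 3 * u * t * t * p2.y + t * t * t * p3.y
    )
}

fun tessellate(p0: Point, p1: Point, p2: Point, p3: Point, segments: Int): List<Point> =
    (0..segments).map { i -> cubicBezier(p0, p1, p2, p3, i.toDouble() / segments) }

fun main() {
    val polyline = tessellate(Point(0.0, 0.0), Point(0.0, 1.0), Point(1.0, 1.0), Point(1.0, 0.0), 16)
    println(polyline.joinToString { "(%.2f, %.2f)".format(it.x, it.y) })
}
```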


Can you elaborate on what the "protocol" looks like, or give us an example?


I like this.

It is very hard to find out whether a definition already exists in the codebase. This can lead to multiple definitions of the same thing, or of the same truth.

Does anyone have a good way to deal with this?


Unfortunately, the issue of lacking a single point of truth is exacerbated the more people work on a project. I believe the problem of logic being spread around comes from not knowing the original intention, and asking the original authors is, IMO, the best way to fix something or add new features. Obviously, knowing the original authors is not always possible, so I try to follow existing patterns.


If the codebase isn’t a total mess, one should be able to guess which components or code paths have to deal with a given truth by virtue of their purpose/function. Then one can investigate the code paths in question to find out where exactly the existing code is dealing with the respective thing.

It should be an automatic thought when implementing some logic to think about which other parts of the system need to be consistent with that logic, and then try to couple them in a way that will prevent them from inadvertently diverging and becoming inconsistent in the future.

In terms of software design, a more general way to think about this is that stuff that (necessarily) changes together (is strongly coupled) should be placed together (have high cohesion).
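
A tiny Kotlin sketch of what that coupling can look like in practice (the names are hypothetical, just for illustration):

```kotlin
// The rule that must stay consistent lives in exactly one place, and every
// code path that depends on it calls that one definition rather than
// carrying its own copy.
object PasswordPolicy {                       // the single point of truth
    const val MIN_LENGTH = 12
    fun isValid(password: String): Boolean = password.length >= MIN_LENGTH
}

// The API layer and the UI layer are coupled to the same definition,
// so they cannot silently drift apart the way two copy-pasted checks would.
fun validateSignupRequest(password: String): Boolean = PasswordPolicy.isValid(password)
fun passwordHint(): String = "Use at least ${PasswordPolicy.MIN_LENGTH} characters"
```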


Some IDEs will warn you about similar blocks of code.


It's interesting to note that this principle doesn't just apply at a low level, e.g. code. It continues to add value when designing application architecture, or can be used to help refine features.


Good article. My question now is: how can we avoid over-planning and under-planning?

A lot of the time I feel like the diminishing returns of planning are pretty steep in many situations, but it's hard to tell beforehand how much time we should spend on the planning. (This is planning for planning, and maybe this itself is over-planning, lol.)


I am amazed at how small the codebase is, and it's also pretty readable. Great to see work like this, thank you!


This is a really good benefit of the RC codebase.

Resource management becomes much more harmonious and easy to follow.

