Hacker News | eloary's comments

No such correlation exists.


"Gender differences in personality tend to be larger in gender-egalitarian societies than in gender-inegalitarian societies"

http://onlinelibrary.wiley.com/doi/10.1111/j.1751-9004.2010....

"Most importantly and contra predictions, we showed that economically developed and more gender equal countries have a lower overall level of mathematics anxiety, and yet a larger national sex difference in mathematics anxiety relative to less developed countries."

http://journals.plos.org/plosone/article?id=10.1371/journal....

"Previous research suggested that sex differences in personality traits are larger in prosperous, healthy, and egalitarian cultures in which women have more opportunities equal with those of men. In this article, the authors report cross-cultural findings in which this unintuitive result was replicated across samples from 55 nations (N = 17,637)."

https://www.ncbi.nlm.nih.gov/pubmed/18179326

"What might be surprising to most people, as we shall see, is the specific pattern of cross-national variation in the gender gap in authority. To cite just one example, in the United States, the probability of a man in the labor force occupying an "upper" or "top" management position is 1.8 times greater than the probability of a woman occupying such a position, whereas in Sweden, the probability for men is 4.2 times greater than for women."

https://www.ssc.wisc.edu/~wright/Published%20writing/CC-C9.p...

Your move.


It was exactly this kind of fun that led me to stay in my altcoin holdings and not participate at all in BCH. Liquidity puts a serious damper on a lot of slick crypto trade ideas - it's a "tortoise and the hare" kind of scenario.


Business news sources tend to be clear headed, because they are in the space of news you can put a dollar value on, and hence will pay for to make more accurate. That doesn't make them perfect, since they still tend towards an editorial agenda favoring "business as usual" power centers over disruptive forces, but I don't feel like I'm being fed outrage most of the time.


>Business news sources tend to be clear headed, because they are in the space of news you can put a dollar value on, and hence will pay for to make more accurate.

It also pays to put out dishonest hit pieces against competitors, or dishonest reviews of your own products to make them seem better than they really are.


We get to see those because they're advertising. That's the whole point of "you have to pay": when I read a hype story about a company, someone else is paying.

Really good, cheap or free info about the present and future tends to lie not in a consumer paper but in dense, wordy PDFs that are cast offs of internal researchers at major multinationals and government bureaus. Those documents are often released to tell investors and taxpayers "here is how we view the situation and why" and their structural mission is to get the assessment right so that they make correct moves. There is little storytelling or human interest to these documents, but they are valuable for the same reason that knowing about the weather is valuable.


Uh, I'd be parsimonious when choosing which reports to give attention and credence.

Within the industry there is a herd tendency.

Results of models which are far off from others get tweaked. Once data is sufficiently in line, only then does it get reported.

I agree with this, though: there's going to be a dense PDF somewhere.


>Those documents are often released to tell investors and taxpayers "here is how we view the situation and why"

Indeed, these tend to be more data-oriented, but they are still not without great faults.

Case in point: the echo chamber of 2008, which culminated from the reports of ratings agencies, academics, financial analysts, government regulators, et al., all pushing the same false narrative to the detriment of investors and taxpayers.


I believe the right way to go is to focus on library code. If long-lived code is to prove exceptional, it has to do so by being grabbed and bolted onto the new thing over and over, and that tends to favor libraries that assume very little, don't have many dependencies themselves, and opt for a simple, robust API over efficiency. The API user can always recode against the API for efficiency in their use case, whereas as a library vendor your ability to guess at hotspots is limited at best, and as a consumer one is always looking for a library that can be adopted easily and disposed of quickly. In effect, "design by placeholder".


I don't dispute your point here, but I'm seeing more and more developers who cut their teeth in the age of libraries and are terrified to touch library code.

There's an assumption among some that library code is flawless, and an imposter-syndrome-like aversion to touching it: the developer fears the code was made a certain way for a certain reason and that they're not skilled enough to work on it.

This is made worse by the few brave developers who do dive in getting all the anger if they make a mistake, at massive scale if the library is popular.


I suspect that the MS and Java tech stacks might factor into this a bit.

I can't speak for other languages, but in Go, the ability to navigate to the definition of any documented symbol via GoDoc has been most interesting for learning about how the sausage is made in certain areas. I don't have a comprehensive knowledge of all of Go yet, but I've learned that environment variables, for example, are backed by a `map[string]string`.

Perhaps something like that might help with the imposter syndrome about it?


I work on a closed source project using closed source libraries with some restricted source access. In general, if I find a bug in that library, it's less hassle for me to report it to them, and move on, than report it, fix it, and deal with the upgrade path later. It's unfortunate, but if I fixed every bug I found in the third party libraries, I'd never get any of my own work done.


This tends to be the mainstream opinion today: figure out the timeless essence at different scales, decompose it into abstractions, freeze their interfaces so that people can start relying on them, and so on. But forty years of trying don't seem to have borne it out. The world changes too quickly, we aren't as good at designing libraries as we think, and the world is filled with interfaces that are historical accidents. With hindsight, it seems clear they were frozen prematurely.

Rather than dismiss these observations as isolated cases of people not practicing good behaviors, I tend to see them as evidence that we should be creating libraries far more conservatively, freezing interfaces far later in the life cycle, perhaps even decades later.

If I'm right, we are also overusing industrial notions of assembly lines and division of labor. Libraries with non-trivial functionality take a long time to get right, and in the meantime they are produced more like guilds of craftsmen than factories. (Even if the products themselves permit factory-like operation at scale.) In that initial bake-in period we are ill-served by conventional metaphors of software components, building blocks, etc. We should be dealing more in vertically-integrated self-contained systems rather than plug-and-play libraries. More OpenBSD, less `gem install`.

I've been thinking about this for at least five years, ever since http://akkartik.name/post/libraries. More: http://akkartik.name/prose


Libraries do take a while to get right, but I think we can only get them right by making them as libraries. All good libraries I've seen started out as part of a vertically-integrated piece of software, but if there's currently an immature library for x, using that immature library generally puts us further along the path to getting a mature library for x than writing your own vertically-integrated implementation of x.

I'd focus on making it easier to migrate between libraries, easier to improve interfaces, and so on. (I'd argue to a certain extent that's already happened, and that's part of why we're using more and smaller libraries).


Getting more users for x certainly helps mature it. However, making it a library also tends to freeze it. So ideally we'd have ways to encourage people to use something without guaranteeing its interface. I think this is a social problem; we need more libraries that have signs on them saying, "alpha software, compatibility not guaranteed," and we need greater awareness among developers that this is a good thing, that being willing to switch interfaces every once in a while results in a better eco-system in the long term.
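For what it's worth, Go's module system already encodes one such sign: under semantic versioning, any release below v1.0.0 makes no compatibility promise, and consumers are expected to know that. A hypothetical `go.mod` illustrating the convention (module paths here are made up):

```
module example.com/app

go 1.21

// Per semver, a v0.x.y release is explicitly "anything may change" --
// the "alpha software, compatibility not guaranteed" sign, built in.
require example.com/alpha-lib v0.3.2
```

The social problem remains, though: tooling can carry the sign, but developers still have to treat depending on a v0 library as opting in to churn.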


Agreed. But I think the idea of listing and cutting down your dependencies (which is what I understood your post to be suggesting) is contrary to that; rather depending on a huge number of tiny libraries makes it easier for libraries to evolve more flexibly.


In general I don't see the connection between the size or number of dependencies and how easy they are to evolve. Truly tiny libraries like left-pad have trivial implementations. Why not just inline them into your project? Then they're no longer libraries, just functions. You get the abstraction benefits, but you don't need to enter into a counter-party relationship with the author.

At scales above the absolutely trivial I think libraries evolve more flexibly based on social rather than technical considerations. It's about not pissing people off when you change the interface. Even something as complex as Go was able to make incompatible changes pre-1.0.


My sense is that smaller libraries are more able to evolve at their own pace - e.g. the separation of django-rest from django core, or languages moving more things out of their standard libraries. Pre-1.0 is the easy part; we'll see how Go does in 5 or 10 years, once it has a base of programs to maintain compatibility with - I predict it will find it harder to evolve and its library will fall behind.


I agree. It's especially pronounced in JavaScript, where the average 'JavaScript developer' is playing some perverse game of library Pokemon.


Definitely stealing "JavaScript Pokemon". Seriously, why are people still adopting more libraries to perpetuate this insanity?


Further thought: this is a similar problem to creating too many standards. https://xkcd.com/927/


Simply: Stretch every day, morning and evening or even more often. Start with physical therapy type exercises and then gradually add stretches from ballet, gymnastics, wrestling, etc. It'll put a literal "spring" in your stride, improve posture and range of motion, and lower perceived pain both during exercise and at rest.

Weight training also improves flexibility, but not in as balanced a fashion. Stories abound of lifters with the posture of a gorilla.


This looks like an approach more akin to federation (or more accurately confederation) on social networks than the "distributed" notion of blockchain trust, which in practice, as noted, still exhibits centralizing political forces per chain, but is robust to most forms of direct attack. I do think that a mix of approaches is what will happen in the future since the two methods express different levels of trust and liquidity of transaction.


Pascal is a smaller language, and so easier to come to grips with at first glance. The syntax is tailored towards some verbosity which usually appeals to beginners. However, the downside is that you don't have a lot of leverage to do things in Pascal: Python pushes you down a path where you are writing something productive very quickly and can reach into the standard libraries to do many tasks - it makes some things very easy. Pascal requires some time to prepare the solution and encode it in syntax, and it's harder to find what you need, but there is usually code somewhere online that you can adapt. This is a lot to ask of beginners who want to do practical work, though. As of right now Pascal retains a lot of strength in desktop GUI code.

In terms of safety/dangerous code, modern Object Pascal style lets you be as dangerous as C if you want, but the default semantics are much more comfortable, and take you away from the danger zone more often.

Both Python and Pascal are relatively easy to get up and running with, and have pretty solid, standardized toolchains for industry use: in contrast C and C++ leave the build process relatively undefined and varying between compilers and platforms, which has resulted in a huge amount of friction to get any project building on a new machine.

