Hacker News | yokohummer7's comments

> likely in order to have those paths easier to copy and paste, but it does so only if !isatty().

I thought the reason was to aid shell scripts that assume no whitespace in file names. Or wasn't it? Also, I believe I've seen single quotes when using `ls` in a terminal, so the behavior is not limited to `!isatty()`.


No, the reason is to aid humans.

    $ touch 'a   b' c d
    $ ls
    'a   b'   c   d
Without quotes you would get something like

    $ ls
    a   b   c   d
and that would be confusing.

It wouldn't help shell scripts that assume no whitespace in file names anyway. You would get tokens like "'a", "b'", "c", "d", which make no sense.

The only sane way to iterate over files that I'm aware of is to use something like

    find . -type f -print0 | while IFS= read -r -d '' fname; do echo "fname: '$fname'"; done
But that's a bashism...
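If the bashism bothers you, a POSIX-portable variant pushes the loop body into find -exec itself. A sketch (the temp-dir setup is only there to make the demo self-contained):

```shell
# portable: the inline sh receives the file names as positional
# parameters, so whitespace survives intact (no bash required)
tmp=$(mktemp -d)
touch "$tmp/a   b" "$tmp/c"
find "$tmp" -type f -exec sh -c '
  for fname in "$@"; do
    printf "fname: %s\n" "$fname"
  done
' sh {} +
rm -r "$tmp"
```

The trailing `sh` fills $0 for the inline script, so the file names substituted for `{} +` start at $1.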


All this is horrible incidental complexity stemming from absolute madhouse insane splitting and substitution rules in shell languages.

Splitting on spaces, expanding asterisks: shell scripting is a minefield. You basically never want any splitting to happen. Applications like reading a CSV by splitting on IFS are dirty hacks that break at the slightest additional parsing complexity (escapes, quoting, etc.).

In Python you can just do `os.listdir('.')` and it will actually do what you want. No 5 layers of "oh, actually" and "yes, but what if", which inevitably happen in threads discussing the simplest shell operations. It just works as intended. If you only want the files and not the directories, you can do `filter(os.path.isfile, os.listdir('.'))`.

There is no reason to use shell scripts for anything more complex than a few lines or for one-off interactive work.


To be even more pedantic, you actually want to do

    while IFS= read -r -d '' fname
    do
      echo "fname: $fname"
    done < <(find . -type f -print0)
Because if you wanted to set a variable from the loop body, it wouldn't work in your example -- when you pipe output into the loop, it runs in a subshell. Process substitution (< <(...)) feeds the loop without a pipe, so the body runs in the current shell and the input still streams. (Note that <<<"$(find ... -print0)" wouldn't work here: command substitution strips NUL bytes, so the delimiters would be lost.) For folks who suggested glob expansion -- that's okay for simple file listings, but find lets you do more complicated ones that can be quite useful. Not to mention that the suggested

    for fname in *; ...
is not actually doing the same thing -- find lists files recursively.
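(For the recursive case there is also bash's globstar option, which makes ** match at any depth. A sketch assuming bash 4+, with a throwaway temp dir for the demo:)

```shell
#!/usr/bin/env bash
# recursive iteration with globs instead of find (bash 4+)
tmp=$(mktemp -d)
mkdir "$tmp/sub"
touch "$tmp/a   b" "$tmp/sub/c"
shopt -s globstar nullglob   # ** recurses; unmatched globs vanish
for fname in "$tmp"/**/*; do
  [[ -f $fname ]] || continue   # skip directories
  printf '%s\n' "$fname"
done
rm -r "$tmp"
```

Like plain globs, ** still skips dotfiles and won't descend into hidden directories.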


find(1) is super useful for any hierarchical or filtering case, and I use it often, but it can also be arcane and unwieldy. In many circumstances a plain shell glob over the current directory is simpler:

    for fname in *; do echo "$fname"; done
This skips dotfiles, of course, but that’s intentional. The point of dotfiles is to be skipped.

With bash or on GNU systems try

    printf "%q\n" "$fname"
instead of echo, to obtain an escaped string.
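The nice property of %q is that its output can be fed back to the shell unchanged. A small sketch to illustrate the round trip (bash-specific):

```shell
#!/usr/bin/env bash
# %q escapes a string so the shell can read it back verbatim
fname='a   b $(danger)'
quoted=$(printf '%q' "$fname")
echo "$quoted"
eval "back=$quoted"              # safe: everything is escaped
[ "$back" = "$fname" ] && echo "round trip OK"
```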


I thought that it would iterate over space-separated tokens. But it works; some magic here. Thanks, that's definitely the proper way to go.


It's because the result of a glob expansion is not subjected to word splitting.


Won't this echo "*" if there are no files in the directory?

I always have to look up the 'safe iteration' invocation when iterating files in a directory, because it involves jumping through a few more hoops than is really reasonable.


Yes. You can change the behaviour in bash by setting the nullglob option (shopt -s nullglob).
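To see the difference in an empty directory (nullglob is bash-specific; the temp dir is just for the demo):

```shell
#!/usr/bin/env bash
# compare glob expansion in an empty directory
cd "$(mktemp -d)"
shopt -s nullglob
for fname in *; do echo "got: $fname"; done   # prints nothing
shopt -u nullglob
for fname in *; do echo "got: $fname"; done   # prints: got: *
```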


    find -exec

?


It won't work for a complex loop body.


> %n takes a pointer and writes (!!) the number of bytes printed so far.

> Okay, everyone probably knows this. Let's get a bit more advanced.

Ok, but I didn't know about that. What's the use?


One use is aligning outputs:

  int n;
  char *prefix = "example";
  char *line1 = "line 1";
  char *line2 = "line 2";
  printf("%s: %n%s\n", prefix, &n, line1);
  printf("%*s%s\n", n, "", line2);
will output

  example: line 1
           line 2
That’s a bit more robust than using strlen(s)+2, where you have to keep the magic constant 2 in sync with ": ". Moving ": " to a variable and using strlen(s)+strlen(separator) would fix that, though (at the price of speed, unless your compiler optimizes it away).


> That’s a bit more robust than using strlen(s)+2

strlen wouldn't even be an option if you were formatting something that's not a string e.g.

    int n;
    int prefix_num = 23;
    char *line1 = "line 1";
    char *line2 = "line 2";
    printf("example %d: %n%s\n", prefix_num, &n, line1);
    printf("%*s%s\n", n, "", line2);


This only works if you're not dealing with Unicode, where the number of bytes, the number of characters, and the width of those characters can all vary.


You don’t need Unicode for that. It also requires you use a monospaced font.

I think the feature predates that and Unicode, though. But even then, it fails if you underline text the way it was done at the time, either by using backspace and underline characters or by using termcap (https://en.wikipedia.org/wiki/Termcap)


This makes me wonder if there's some sort of Unicode equivalent for this?


wcwidth()


Thanks, I wasn't aware of this. Here's the man page for anyone else who's interested: https://man7.org/linux/man-pages/man3/wcwidth.3.html


I'd never heard of %n, but I use printf's return value (the number of bytes written) for this kind of purpose, so

  n = printf("%s: ", prefix);
  printf("%s\n", line1);
  printf("%*s%s\n", n, " ", line2);


Yeah, but the difference is that you can use %n wherever you need it in the format string. Depending on what arguments come after it, figuring out the length of the printed output yourself might not be trivial, whereas printf already has to keep a running count during execution in order to return it at the end, so supporting %n is easy.


I seem to recall using it to auto-adjust column widths.

And we did something with it involving string translations. Since we didn't control the translated format string we couldn't just count the characters in the source code.

But it's been a really long time and I don't recall the details.


Well the fun use is using it to exploit printf format string vulnerabilities. Microcorruption has a fun level involving that IIRC.


Normally you'd use %n with input functions like scanf(), not printf().


To support format string vulnerabilities! /s


> Does anyone else experience the same?

The first time I experienced such a feeling was when Visual Studio .NET came out. Compared to the previous version, Visual Studio 6, the dialogs and wizards in VS.NET gave me a "web-like" feeling. I guess it was one of the first attempts to use web technologies to build a desktop app. Along with the Active Desktop feature, they were truly ahead of their time.

However, that doesn't mean I loved them. VS.NET was slow as hell and I believe it was one of the reasons why VS6 lasted for so long.


It's strange nobody mentioned NX (NoMachine). When I tried it ~10 years ago, it was much faster than VNC, nearly identical to RDP. Why isn't it more popular?


NX was fine in the 3.5.x series, but the client has been wonky since 4.x. X2Go is based on the same (open-source) underlying library and seems to be better -- at least in my opinion, and maybe in others' here too, since nobody apparently mentions NX anymore.


When I tried it last year it was a pretty horrible experience, because it installed both the client and the server on Windows and macOS when I just wanted the client. I had to go in and kill some services to disable the server part. Having to do stuff like that for security reasons is a bit of a turn-off.

The actual performance of NX was good, so I'll give them that. I think other people had licensing concerns, but I'm fine with it.


Nowadays there is a free enterprise client that does not install a server, nor does it require admin rights. For me it works quite well, close to RDP.


Sometimes I've had weird experiences with OS X as the server, but in general it's fairly good (miles better than VNC, at least).


Even better than NX is ThinLinc: much more reliable, faster connections, and it handles dynamic resolution resizing.


I was also annoyed by it not remembering the scroll position. The site doesn't live up to its name.


This problem also seems to stem from the fact that Zoom has been used primarily in corporate settings until now, which kinda validates their claim. Definitely not ideal, but understandable.


which exacerbates the problem.

A possible scenario is that users continue to browse the user directory and join meetings with their Zoom account even after leaving a company.


Companies that are concerned about this can set up SSO authentication with Zoom, I believe. So once the user is removed from the company's directory server, they wouldn't have access to the Zoom address list either.


I'm not dismissing the overall security point, but this seems like a pretty weak attack vector. If your company is routinely not deactivating accounts associated with your domain as part of your offboarding, being able to see e-mails and pictures of your employees is not your biggest problem.


Well, not if you still can log in to Zoom even if your email account was deactivated.


> Do not attempt to check AGPL-licensed code into google3

What is this "google3" thing? Is this the name of their monorepo?


Yes.


How do you even remember a thread from last year? I'm always surprised by the vast knowledge required for moderation. Possibly with the help of some internal search tools?


Oh yes. But not so internal: https://hn.algolia.com.

I do have some keyboard shortcuts in my moderation client software that help me bring up these search pages quickly.


I'm in a similar boat, so I have a few questions:

1. Why did you switch? Are you satisfied with the new experience?

2. Why did some of the history need to be squashed? Were there any technical concerns?


1a) I wasn't involved in the initial decision to switch, but gather it came down to expense and some unsupported perforce-related tooling.

1b) Absolutely! The switch from p4 could've been smoother, but git and access to the associated ecosystem is far ahead of Perforce for everything except storing binaries. We've been abusing p4 by storing electronics CAD files and similar, and do need to put some effort into a new solution there. For source code, git (and the ecosystem of modern tooling it gives access to) is a huge improvement over p4.

A particular improvement has to do with network latency. Working from halfway around the world, I notice that p4 operations involving many files are very much slower than the similar git operations.

2a) I wasn't involved, but the particular project mentioned above had moved within the depot at some point, so I guess the conversion tool that was used couldn't handle that move. I intend to rebuild the older history in git, then see how viable it is to use git-replace to stick it onto the beginning of what we have now.

2b) Yes. However, nontechnical concerns dominated the conversation by far, to the detriment of the technical ones.


But it doesn't have to be VBA. Any language with a decent OLE Automation library would suffice. I've been using Python to automate my Excel files, and am really happy about that. I tried to learn VBA, but could not adapt to its archaic syntax and the clunky VBA editor that comes with Excel.

