> likely in order to have those paths easier to copy and paste, but it does so only if !isatty().
I thought the reason was to aid shell scripts that assumed no whitespaces in file names, wasn't it? Also, I believe I've seen single quotes when using `ls` on a terminal, so the behavior is not only for `!isatty()`.
It wouldn't help shell scripts that assumed no whitespace in file names anyway. You would get tokens like "'a", "b'", "c", "d", which make no sense.
The only sane way to iterate over files that I'm aware of is to use something like
find . -type f -print0 | while IFS= read -r -d '' fname; do echo "fname: '$fname'"; done
All this is horrible incidental complexity stemming from absolute madhouse insane splitting and substitution rules in shell languages.
Splitting on spaces, expanding asterisks -- shell scripting is a minefield. You would basically never want any splitting to happen. Tricks like reading a CSV by splitting on IFS are dirty hacks that break with the slightest additional parsing complexity (escapes, quoting, etc.).
In Python you can just do `os.listdir('.')` and it will actually do what you want. No 5 layers of "oh, actually" and "yes, but what if", which inevitably happen in threads discussing the simplest shell operations. It just works as intended. If you only want the files and not the directories, you can do `filter(os.path.isfile, os.listdir('.'))`.
There is no reason to use shell scripts for anything more complex than a few lines or for one-off interactive work.
while IFS= read -r -d '' fname
do
echo "fname: $fname"
done < <(find . -type f -print0)
Because if you wanted to export a variable from the loop body, it wouldn't work in your example -- the loop runs in a subshell when you pipe output into it. Feeding the loop via process substitution avoids the subshell while still streaming the input. (The often-suggested <<<"$(...)" variant gets around the subshell too, but command substitution drops NUL bytes, so it can't carry -print0 output.) For folks who suggested glob expansion -- that's okay for simple file listings, but find lets you do more complicated ones that can be quite useful. Not to mention that the suggested
for fname in *; ...
is not actually doing the same thing -- find lists files recursively.
find(1) is super useful for any hierarchical or filtering case, and I use it often, but it can also be arcane and unwieldy. In many circumstances your cwd shell iterator can be simpler:
for fname in *; do echo "$fname"; done
This skips dotfiles, of course, but that’s intentional. The point of dotfiles is to be skipped.
won't this echo "*" if there are no files in the directory?
I always have to look up the 'safe iteration' invocation when iterating files in a directory, because it involves jumping through a few more hoops than is really reasonable.
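For the record, the empty-directory case raised above is what bash's `nullglob` option is for. A bash-specific sketch:

```shell
#!/usr/bin/env bash
# With nullglob set, an unmatched * expands to nothing rather than a
# literal "*", so the loop body simply never runs in an empty directory.
shopt -s nullglob
for fname in *; do
    printf 'fname: %s\n' "$fname"
done
```

Without `nullglob`, the loop does run once with `fname` set to the literal `*`.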
That’s a bit more robust than using strlen(s)+2, where you have to keep the magic constant 2 in sync with ": ". Moving ": " into a variable and using strlen(s)+strlen(separator) would fix that, though (at the price of speed, unless your compiler optimizes it away).
This only works if you're not dealing with Unicode, where the number of bytes, the number of characters, and the width of those characters can all vary.
You don’t need Unicode for that. It also requires you use a monospaced font.
I think the feature predates that and Unicode, though. But even then, it fails if you underline text the way it was done at the time, either by using backspace and underline characters or by using termcap (https://en.wikipedia.org/wiki/Termcap)
Yeah, but the difference is that you can use %n wherever you need it in the format string. Depending on what arguments come after it, figuring out the length of the printed arguments might not be trivial. printf, on the other hand, already has to keep a running count during execution in order to return it at the end, so it's easy to add support for it.
I seem to recall using it to auto-adjust column widths.
And we did something with it involving string translations. Since we didn't control the translated format string we couldn't just count the characters in the source code.
But it's been a really long time and I don't recall the details.
The first time I experienced such a feeling was when Visual Studio .NET came out. Compared to the previous version, Visual Studio 6, the dialogs and wizards in VS.NET gave me a "web-like" feeling. I guess it was one of the first attempts to use web technologies to make a desktop app. Along with the Active Desktop feature, they were truly ahead of their time.
However, that doesn't mean I loved them. VS.NET was slow as hell and I believe it was one of the reasons why VS6 lasted for so long.
It's strange nobody mentioned NX (NoMachine). When I tried it ~10 years ago, it was much faster than VNC, nearly identical to RDP. Why isn't it more popular?
NX was fine in the 3.5.x series, but the client has been wonky from 4.x onward. X2Go is based on the same (open-source) underlying library and seems to be better -- at least in my opinion, and maybe in others' here too, since apparently nobody mentions NX anymore.
When I tried it last year it was a pretty horrible experience, because it installed both the client and the server on Windows and macOS when I just wanted the client. I had to go in and kill some services for the server part. Having to do stuff like that for security reasons is a bit of a turn-off.
The actual performance of NX was good, so I'll give them that. I think other people have licensing concerns, but I'm fine with it.
This problem also seems to stem from the fact that Zoom has been used primarily in corporate settings until now, which kinda validates their claim. Definitely not ideal, but understandable.
Companies that are concerned about this can set up SSO authentication with Zoom, I believe. Once a user is removed from the company’s directory server, they wouldn’t have access to the Zoom address list either.
I'm not dismissing the overall security point, but this seems like a pretty weak attack vector. If your company is routinely not deactivating accounts associated with your domain as part of your offboarding, being able to see e-mails and pictures of your employees is not your biggest problem.
How do you even remember a thread from last year? I'm always surprised by the vast knowledge required for moderation. Possibly with the help of some internal search tools?
1a) I wasn't involved in the initial decision to switch, but I gather it came down to expense and some unsupported Perforce-related tooling.
1b) Absolutely! The switch from p4 could've been smoother, but git and the associated ecosystem are far ahead of Perforce for everything except storing binaries. We've been abusing p4 by storing electronics CAD files and similar, and do need to put some effort into a new solution there. For source code, git (and the ecosystem of modern tooling it gives access to) is a huge improvement over p4.
A particular improvement has to do with network latency. Working from halfway around the world, I notice that p4 operations involving many files are very much slower than the similar git operations.
2a) I wasn't involved, but the particular project mentioned above had moved within the depot at some point, so I guess the conversion tool that was used couldn't manage that move. I intend to rebuild the older history in git, then see how viable it is to use git-replace to stick it on the beginning of what we have now.
2b) Yes. However, nontechnical concerns dominated the conversation by far, to the detriment of the technical ones.
But it doesn't have to be VBA. Any language with a decent OLE Automation library would suffice. I've been using Python to automate my Excel files and am really happy with that. I tried to learn VBA, but couldn't adapt to its archaic syntax and the clunky VBA editor that comes with Excel.