AMD already has Composable Kernels[1], and supports, for example, Triton[2]. Then there is also HIP[3], along with tools to automatically convert CUDA code to HIP. But since CUDA is the de-facto standard, there is always friction in using something else (unless you also need to support the AMD stack).
Making something merely CUDA-compatible is non-trivial, and since Nvidia decides CUDA's direction and new features, the alternatives will always lag behind. There are also major hardware differences between Nvidia and AMD at the moment, which can make highly optimized CUDA code inefficient or even buggy.
> The goal of the Hutter Prize is to encourage research in artificial intelligence (AI). The organizers believe that text compression and AI are equivalent problems.
I believe that they believe that, and that it _could_ be true. That's far from declaratively stating that they are the same thing, as if there were some sort of evidence and consensus for such a claim.
I don't think the distinction between indoor and outdoor usage is relevant; in the end we are still talking about the volume of drinkable water people use, whether for drinking, cooking, washing, or watering the lawn.
Nothing is stopping you from making up a physical address. As far as Namecheap is concerned, I live in New York (I do not live in New York). In theory "they" could revoke your domain, but in practice my oldest domain is 28 years old and still mine.
"In a notebook, you can go back and edit and run individual lines of the notebook without re-running the whole notebook from the start and without re-computing everything that depends on what you just edited."
Isn't this standard in REPLs as well? You can select the code you wish to run and press Ctrl+Enter or whatever. I must admit, I've programmed Python for about 10 years now in Spyder and VS Code, but I haven't used notebooks at any point - just ad-hoc scripts or actual source files.
My definition of a "notebook" is an ad-hoc script split into individual "cells", which are typically run as a whole. In my workflow, I just select the code I wish to run. Sometimes it is one expression, one line, 100 lines or 1000 lines, depending on what I've changed in the script.
> Isn't this standard in REPLs as well? You can select the code you wish to run and press Ctrl+Enter or whatever.
Not usually, no. Type `python` at the command prompt - what you get is a REPL. Type `clisp` at the command prompt, or `wish`, or `psql`, or `perl` or even `bash` - those are all REPLs.
Very different from a program that presents an editor and then lets the user selectively choose which lines/expressions in that editor to run next. For example, type `emacs somefile.sql` at the command prompt. The application that opens is most definitely not a read-eval-print loop.
Why would adding fancy select or cut-and-paste features to a REPL make it not a REPL? Selectively choosing which lines to run is just a convenience to let you not have to type the whole line or set of lines again, it doesn’t really change the base interaction with the interpreter.
> Why would adding fancy select or cut-and-paste features to a REPL make it not a REPL?
For the same reason that adding (working) wings to a car makes it not a car anymore.[1]
I mean, to my mind, when something is satisfying a different primary use-case, then that thing is a different thing.
I'm sure there's some fuzziness in the distinction between "This is a REPL/car and this is a Notebook/plane".
Usually it's very easy to see the distinction - the REPL is waiting for the next command and the next command only while the notebook takes whatever input you give it, determines whether it got a command or content, and reacts appropriately.
[1] Tons of examples, TBH. I don't refer to my computer as my calculator, even though the computer does everything a fancy calculator can do. People don't call motorcycles 'bicycles', even though the motorcycle can go anywhere that a legal bicycle can go. More telling is how people don't call their computer monitor 'TV' and don't call the TV a 'monitor' even when the same actual item is used for both (e.g. I repurposed an old monitor as a small-screen netflix-box, and now an item that used to be called 'monitor' by wife and kids is called 'TV' by wife and kids).
A flying car with wings is still a car - that's the whole point of it: it can drive you to the airport and drive like a car. I don't care what you call your computer, it can still do math. People who have their TV hooked up to their computer would more readily refer to it as a monitor. Idk, I just think REPLs are kinda shit for interacting with the present state of a kernel (as Jupyter calls them). Jupyter's better, but still kinda shit, because it could automatically infer the important variables in scope and keep a watch list like a debugger does, and then suggest things to do with them, since it's a REPL and not an IDE. But the thing is, fundamentally they're read-eval-print loop interfaces to the computer and its current working state.
Ugh, this is a gish gallop of broken straw-man analogies. Being able to select and evaluate a single line in a notebook is nothing like adding wings to a car. Fundamentally, selecting a line to evaluate is no different from typing that line again. It's a shortcut and nothing more; the interaction is still read-eval-print. Note that REPL doesn't even refer to where the input comes from - the point is simply that it handles your input, processes it, displays the result, and then waits for more input. This is as opposed to executing a file, where the interpreter exits once execution is completed and the return value is not automatically printed.
Jupyter Notebook absolutely is a REPL; see my sibling comment above for the WP link describing it as such. It waits for input, then evals the input, then prints the return value, and then loops.
I bought a Unihertz Titan Pocket - at least it was different! Quite thick and heavy, but the battery lasts about 6 days even without any power-saving mode.
Since with CUDA you are programming so close to the hardware, and the hardware (and CUDA itself) has advanced so much, I recommend going carefully through all the major CUDA versions to see how it has evolved. Well, strictly speaking I'm talking about the different versions of "compute capability". Of course, Wikipedia has a good summary: https://en.wikipedia.org/wiki/CUDA#Version_features_and_spec...
Another point is that you don't need to write any CUDA code to be able to utilize GPU computing. If you need ML models, you have frameworks like PyTorch and TensorFlow. You just need to express your mathematical problem, and the framework will take care of the rest.
Even if you need to write custom GPU code, you don't need to do it in C anymore! For example, you can JIT-compile Python using Numba or Triton.
Usually writing custom code is only required when:
- You are doing something novel, like PhD level stuff
- You must optimize the ML project for performance and throughput at inference time
- You need to brute-force solutions (be it crypto-hashes, passwords, NP-complete problems, ...)
My last question to you: do you want to learn to use these pre-existing frameworks and libraries, or to develop them, or maybe even create new ones? Whatever your answer is, I'd say the first option is a great stepping-stone toward the second.
I was hoping I could revive my background in lower-level programming. I am not actually interested in creating AI-based solutions per se, but rather in developing the frameworks that enable them - hence CUDA and algorithmic programming in general.