I'm a CS professor at an undergraduate college. I'm currently teaching a class on AI coding for non-programmers and also using Claude Code for all the projects in my upper level courses. I'm also planning to update our first course in the fall to include an intro to agentic programming.
Here are some practical suggestions based on what I've seen with my students. To summarize: AI programming de-emphasizes the specifics of languages and frameworks but rewards having a good knowledge of systems and a careful, structured development process.
1. Do learn some by-hand programming in a standard language like Python. Even if you aren't looking at the AI-generated code, this will teach you the building blocks of software design - things like functions, classes, files, and data types - which will help you design more complex applications. The ability to look at a program and reason about what it's doing is a key skill.
2. Learn the terminal environment if you aren't already using it. This will unlock additional levels of control and help you understand how agents use tools.
3. Learn the architecture of the applications you're creating. If you're making web apps, for example, learn about the front-end and back-end, how they exchange information, how a back-end database works, etc. The key is not the low-level details of those things, but how an application is divided into parts that exchange data with each other. This knowledge helps you move from a general concept for an application to an actual design.
4. Related to that point: the concept of encapsulation in software design. This is the idea that each part of your system should be self-contained and exchange information with other parts through a well-defined interface. If a component is encapsulated, you can change its internal details without messing with the rest of the system. This is important for AI, because it allows you to carefully control the targets of your generations.
5. Specs-driven development. This is the evolution of vibe coding and is the main approach I teach now. Chat about the problem, then develop a detailed spec that describes the desired behavior. Refine that into a system design and detailed step-by-step task list with tests for each step. Working with the AI to compare design options is a great learning tool.
6. You don't need to learn much about algorithms unless you want to. Models are very strong at choosing and implementing all of the standard algorithms. Let these emerge naturally as you work on interesting problems.
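To make point 4 concrete, here's a minimal Python sketch (the class and method names are hypothetical, just for illustration). Callers only touch the public methods, so the internal storage could be swapped out - say, for a database table - without changing anything else, which also makes the class a safe, well-scoped target for AI edits.

```python
class TaskStore:
    """A self-contained component: callers use only add(), complete(),
    and pending(). The internal storage (a plain list here) could be
    replaced without changing any code outside this class."""

    def __init__(self):
        self._tasks = []  # internal detail, hidden from callers

    def add(self, title):
        self._tasks.append({"title": title, "done": False})

    def complete(self, title):
        for task in self._tasks:
            if task["title"] == title:
                task["done"] = True

    def pending(self):
        return [t["title"] for t in self._tasks if not t["done"]]


store = TaskStore()
store.add("write spec")
store.add("build prototype")
store.complete("write spec")
print(store.pending())  # -> ['build prototype']
```

Because the interface is small and explicit, you can ask the AI to regenerate just this class and be confident the rest of the system is untouched.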
We're building it out as the semester goes on. Right now, it's mostly the in-class practice activities we're working through each day. This is the first time I've taught this version of the class so it's pretty experimental.
Here's the complete version of my intro class with notes, labs, and projects:
Robson's argument is that it isn't a trig table in the modern sense and was probably constructed as a teacher's aid for completing-the-square problems that show up in Babylonian mathematics. Other examples of teaching-related tablets are known to exist.
On a quick scan, it looks like the Wildberger paper cites Robson's and accepts the relation to the completing-the-square problem, but argues that the tablet's numbers are too complex to have been practical for teaching.
We use ARM in our computer organization classes. It's more accessible than x86 and allows you to get a feel for the important concepts of assembly: register-to-register operations, conditional branching, and how the stack is used to manage function calls and returns.
I like the CPUlator as a platform. It lets you step through the program one instruction at a time and observe all of the registers and memory locations.
This piece by Simon Willison is a good overview. Covers setting expectations, having conversations about the project, context management, and then building an example using Claude Code.
To learn a folk style you really need a mentor that can connect you to the oral tradition. Rather than scales and technical exercises, focus on learning rhythm and complete pieces. If there is a flamenco dance school in your area, see if they offer guitar lessons and the opportunity to accompany dancers.
Always remember: The notes are not necessarily the most important thing.
That said, there's a lot of overlap between flamenco and traditional classical guitar, so learning classical pieces (particularly by Spanish composers) will help build your fingerstyle technique. Solo Guitar Playing by Frederick Noad was the book I used for classical practice when I was younger.
Also check out pseudo-flamenco pieces that have been recorded by rock and country guitarists, like "Mood for a Day" and "Malagueña", and smooth flamenco crossover artists like Ottmar Liebert and Jesse Cook. You might find these too diluted from "real" flamenco, but they can be another entry point for building up your playing.
I'm a professor at an undergraduate college. I've supervised honors thesis projects and published papers and posters with students. You've got two main options for doing research as an undergrad:
1. Join an existing research group as an assistant. This is more likely at a big university that emphasizes scholarship and publications. If you take this route, you'll probably have a small part of a larger project and your direct supervisor might be a postdoc or grad student.
2. Work on a complete project, either as an independent study or as an honors-level thesis. This is more common at an undergrad college like mine.
The advantage of (1) is that you'll be exposed to a higher level of research and get a better feel for what grad school is like. The advantage of (2) is that you get to take on the entire research project, including the lit review and writing.
In either case, the professor's research focus will play a big role in the topic of the project. The best advisor is one you have a good relationship with and who has experience incorporating undergrads into their work. Prioritize that over the specific research topic. Think carefully about the scope of the project; a focused project within the supervisor's area of expertise is more likely to succeed. Trying to make up your own topic is probably a bad idea.
I have a few practical tips from previous thesis projects I've supervised:
- If you're doing a senior project, remember that you only have ~8 months, which has to include all the pre-research and writing. You need to be making progress every week. If you get stuck, seek help so you can get unstuck as quickly as possible.
- Schedule time for your research every day like a class. That time is blocked off and you WILL NOT schedule anything else during it. Work in brief, regular sessions. Don't binge.
- Keep backups of all your work, notes, and drafts.
- You may have to do a lit review, but don't get bogged down in it. I frequently see issues with students who try to read "every" paper and lose time that would be better spent on their actual research. You should have 2-3 key papers (ideally identified by your professor) that you work through in detail. You may add other papers, but focus your reading on quickly extracting key points and the context of the paper -- this is an important research skill to develop early. Set an aggressive limit on the size and scope of the lit review so you can finish it and move on to other topics.
- Be careful about projects that require hardware or complex system setups. You can easily lose a lot of time trying to get things to work.
- You might be able to publish something, but don't get hung up on that as an outcome. Posters are a great result for an undergrad project.
- The research question is the driver of the project. Make sure you clearly understand what you want to learn and how the design of your experiments/analysis relates to its answer. Some students fall into the trap of doing "research" that's more of a summary of an area, rather than an original investigation. Again, start with a narrow, carefully-scoped question; you can always broaden the scope if you have time.
I wrote a document for my own students on the thesis research process, suggested timelines, and specific writing tips for the sections of the final paper:
I appreciate it; this is kind of what I was thinking as well. Option one might not be available to me, at least through approaching faculty members at my college, so the second is the approach I'm most likely to take.
With that in mind, I was wondering: how do I know if I have a novel idea, other than searching common computer science conferences and papers for researchers who may have done it? What can I do to be sure?
I wouldn't worry too much about coming up with a "novel" idea on your own. Your advisor should help you select and refine a topic. That's a major part of their role.
Published research is more like a conversation among its participants. There's a stream of thought and continuity that connects each paper to its predecessors. Ideas come out of engaging with the conversation and thinking about new directions and open questions. One of my advisors used to talk about "research taste" -- the process of learning what good research looks like and how to choose topics, which develops over time through exposure to the field.
I'd encourage you, at this stage, to just focus on defining your interests. If you're interested in bluetooth security, for example, why is that? What do you find engaging about that topic? Then you can build from there: who's written about that and what results have they produced? Are there good survey papers about the current state of the art? What are the key subfields and their main questions?
You could think of this as "pre-research" -- getting oriented toward an area and building background knowledge. Let it be driven by your curiosity. Find a thread that seems promising and pull on it for a little while. Use tools like Deep Research for help, but you still want to read the key papers.
A good undergrad project is often a tweak of an existing result. I really like projects that use a well-defined, standard methodology, which allows the student to focus on developing the research question and the work of data collection, analysis, and writing -- without having to design the entire process from scratch. If you find a paper that you like, think about keeping the same basic approach, but modifying the research question to explore a different angle on the topic. A paper's conclusion will often suggest open questions for further work.
He writes about how, since the 1990s, incentives have pushed artists to shy away from making bold aesthetic choices that might seem dated a few years later.
The result is more stability and a longer shelf-life for culture, but less experimentation and fewer ways for new styles to break out.
I'm a professor at a small college. I teach intro programming most semesters and we're now moving to using tools like Cursor with no restrictions in upper-level courses.
"How do students learn to code nowadays?" - I think about this pretty much all the time.
In my intro class, the two main goals are to learn about structured programming (using loops, functions, etc.) and build a mental model of how programs execute. Students should be able to look at a piece of code and reason through what it does. I've moved most of the traditional homework problems into class and lab time, so I can observe the students coding without using AI. The out-of-class projects are now bigger and more creative and include specific steps to teach students how to use AI collaboratively.
My upper-level students are now doing more ambitious and challenging projects. What we've seen is that AI moves the difficulty of programming away from remembering the details of languages or frameworks and toward having a careful, structured development process:
- Thinking hard and chatting about the problem and the changes you need to implement before doing anything
- Keeping components encapsulated and thinking about interfaces
- Controlling the scope of your changes; current AIs work best at the function or class level
- Testing and validation
- Good manual debugging skills; you can't rely on AI to fix everything for you
- General system knowledge: networking, OS, data formats, databases
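To illustrate the testing point with a small, hypothetical sketch: when an AI generates a function, pair it with quick checks that you wrote or reviewed yourself, covering the normal case and the edge cases.

```python
def normalize_scores(scores):
    """Scale a list of nonnegative scores so they sum to 1.0."""
    total = sum(scores)
    if total == 0:
        return [0.0 for _ in scores]  # avoid division by zero
    return [s / total for s in scores]

# Quick validation you can read and reason about yourself:
assert normalize_scores([2, 2]) == [0.5, 0.5]
assert normalize_scores([0, 0]) == [0.0, 0.0]
assert abs(sum(normalize_scores([1, 2, 3])) - 1.0) < 1e-9
```

The checks are deliberately small; the point is that the human stays in the loop on correctness, even when the implementation was generated.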
One of my key theories is that AI might lower the value of "computer science" as a standalone major, but will lead to a lot more coding across fields that currently don't use it. The intersection of "not a traditional engineer" and "can work with AI to solve problems with code" is going to be an emerging skill set that will change a lot of disciplines.
I'm a tenured CS professor at a liberal arts college. I teach the entire curriculum, from intro programming up to senior-level electives.
I'm currently teaching a "Programming with AI" course where we're using Cursor with no restrictions. I now think "learning programming" as we've traditionally conceived it is toast. Core CS projects, the kind that would have been at the heart of the curriculum, take students maybe 30 minutes to do with modern tools.
Previously, the hard part of learning programming was developing the skill of putting code statements in the correct logical structure, and building that up from small programs of a few lines, to functions/classes, and eventually to larger programs. That happened in parallel with building the knowledge background of systems, libraries, algorithms, databases, etc. that you had to draw on to write complex applications.
The core work of the middle part of the CS major - building skill by writing progressively more complex functions and classes - can be largely automated at this point. You've still got to learn things, but the emphasis shifts. Details about libraries and frameworks - implementation at the level of functions and classes - are deemphasized. The scope of what my students can do is bigger, so they're dealing with the challenges of designing and debugging larger applications.
For new programmers the most important thing to learn is a mental model of how programs execute: Can you look at a small program and understand what it does? This is a prerequisite to generating bigger programs with AI. Students also need to learn to use AI collaboratively, to think through a problem like a pair programmer, rather than expecting to get complete one-shot generations.
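For example (a hypothetical exercise), a student with a working mental model should be able to read a short function like this and predict its output before running it:

```python
def running_totals(values):
    """Return the cumulative sums of a list of numbers."""
    totals = []
    total = 0
    for v in values:
        total += v          # accumulate the sum so far
        totals.append(total)
    return totals

print(running_totals([3, 1, 4]))  # -> [3, 4, 8]
```

Tracing the loop by hand - total goes 3, then 4, then 8 - is exactly the kind of reasoning that carries over to reviewing AI-generated code.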
I redesigned our intro course to move most of our core skills practice (what would previously have been homework problems) into class time. The out-of-class projects are now bigger and more ambitious, but also give students specific steps and prompts for using AI. A major point of the projects is agency. I like projects where there isn't one right answer, but instead students have to set a vision and then iterate on it.
Beyond the first course, we now need to spend more time on "software engineering" and craftsmanship:
- Clarifying requirements
- Reasoning about the design of a larger program with many parts
- Communication between parts of the application: DB design, APIs
- Debugging intuition, why is this thing breaking?
- Testing
- Working with bigger codebases
These things have always been around, but we often didn't teach them until upper-level courses. Early-level projects weren't complex enough to require serious craftsmanship. AI lets us do more ambitious things earlier in the curriculum - we should be looking for ways to raise our standards and continue challenging students.
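As a hypothetical sketch of the "communication between parts" item above: treat the boundary between layers as a small, explicit contract. Here the rest of the app talks to the "database" only through a lookup function, and the API layer defines the JSON shape a front-end would consume (the names and data are made up for illustration).

```python
import json

# A stand-in "database" layer. Nothing outside this module's functions
# touches this dict directly.
_USERS = {1: {"name": "Ada", "email": "ada@example.com"}}

def get_user(user_id):
    """Back-end lookup; returns a plain dict or None."""
    return _USERS.get(user_id)

def user_endpoint(user_id):
    """API layer: converts the back-end result into the JSON contract
    a front-end would consume."""
    user = get_user(user_id)
    if user is None:
        return json.dumps({"error": "not found"})
    return json.dumps({"id": user_id, "name": user["name"]})

print(user_endpoint(1))   # -> {"id": 1, "name": "Ada"}
print(user_endpoint(2))   # -> {"error": "not found"}
```

Once the contract is pinned down, the storage layer and the front-end can be developed (or regenerated by an AI) independently.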
For new programmers, I would encourage learning how to write standard single-file programs (with variables, loops, functions, lists, etc.) without using AI. Then bring in AI as a partner to write larger applications with specific libraries. Focus on raising the scale of your programs to the point where you can't immediately one-shot them, and then use the difficulties you encounter to think about design issues. Gradually build your knowledge base of systems and algorithms, but don't obsess about memorizing implementation details right away. Build end-to-end applications that do things; LeetCode problems have always been terrible.
For my fellow professors: You should start talking about how to redesign your courses and curriculum. That requires taking some guardrails off so you can see what students can really do with modern AI tools.
I have three sons in the 7-12 range, and I'm a professor at an undergraduate college that's done a lot on teaching with AI.
We've let our kids play with LLMs by having conversations in voice mode and generating images. The youngest one likes doing this, but it's a novelty, not something that he does all the time.
For academic work, we've had success using Perplexity (with parental guidance) for the older kids' projects that require Internet research. The ability to get an overview of a topic at a moderate level of complexity with links to other sources is beneficial. This isn't a substitute for doing in-depth research in the library or with actual peer-reviewed articles, but they're not yet at that level of depth.
At the college level, the most important lesson we're trying to teach is using LLMs as a source of ideas, suggestions, and feedback to advance your work, rather than as a tool for generating finished work. I often phrase this as "collaborating vs. delegating". I want students to think critically about their ideas and repeatedly iterate with LLMs in the loop to help solve the creative problems they encounter - but without outsourcing their own vision for the project.
My colleagues are seeing good results across multiple disciplines using LLMs for topic development and pre-writing, so I'd encourage leaning into that role, as opposed to jumping straight into text generation.
We've also learned that students benefit from a clear process with specific example prompts. Using AI well requires developing critical thinking and self-reflective skills, so there's a process of maturing that comes with time and exposure.
If you're interested, here's an example research assignment I've used in my own classes with some specific prompts and suggestions for different phases of the writing process: