In the lab at McGill University in Montreal, Canada, where I am a PhD student, my colleagues and I study the optical and electronic properties of organic molecules. Most of this work is based on chemical synthesis, but we complement it with the occasional use of software to predict and rationalize molecules' properties.
Recently, I wanted to understand how a specific reaction mechanism could explain why some molecules are much more stable than others that seem very similar. As I took on this project, I realized that my computational skills were not up to the task: I would need to learn to code so I could manage and process the large volumes of data that I was rapidly producing.
Fortunately, high-quality educational resources have never been more plentiful or accessible. I had already casually tried coding in my free time, relying mostly on free apps, YouTube videos and books. A colleague suggested that I try using artificial intelligence (AI) instead. I was amazed at how useful I found the free tools based on large language models (LLMs), such as ChatGPT and Claude, for self-teaching, and how dramatically they accelerated my learning.
I had experimented with ChatGPT before, but this was the first time I found it valuable enough to use in my work and daily life. Its output for various other tasks hadn't been sophisticated enough to be useful. For example, when I asked for ideas for reactions that I could try out to synthesize target molecules, it suggested methods that I found either obvious or mistaken. And when I fed it point-by-point notes to turn into the first draft of a document, I had to edit the prose so heavily that it didn't really save me any effort.
I think the reason I find LLMs useful for coding, but unhelpful for many other tasks, is that there is a sweet spot at which the information they produce is most useful for learning. I've spent years doing chemical synthesis on complex systems, so I'm an expert in the field and understand its subtleties better than AI might (at least for now). As a result, I was disappointed when I consulted AI for insight into difficult reactions.
For coding, self-teaching with AI works well for me because it's much closer to the margins of my existing knowledge and experience. The idea that we learn best when we fit new concepts into what we already know is well established in theories of education. The early-twentieth-century Russian psychologist Lev Vygotsky wrote about the "zone of proximal development", a sweet spot where the learner can enjoy the guidance of a "more knowledgeable other".
I was already aware of the basic concepts and syntax of coding, having completed a couple of small introductory courses. This basic knowledge provided valuable foundations, which would have been difficult to establish without direction. I could then enjoy the guidance of my 'more knowledgeable other'.
One of the greatest benefits of AI is that it enables a style of education that is usually very resource intensive: rapid question-and-answer. In most educational settings, it's impractical to have a teacher on hand to answer every question or produce large volumes of personalized content. But the inherently conversational mode of a tool based on an LLM means that it replicates the speed and ease of a personal tutor.
It also streamlines the process of seeking information. Even if you are learning through a course, such as a set of videos or a book, it can take a lot of time and effort to find the answers to specific questions. For example, your question might be answered in an article, but with an explanation not suited to your skill level, so you have to spend more time searching for context or definitions. Or you might post on a forum asking for help and spend hours or days waiting for an answer.
With an LLM, you can tune the tone or skill level of an explanation, or ask follow-up questions about specific terms or concepts. The short feedback time helped me to enter a flow state, which created a sense of control and kept me motivated.
I was initially surprised to learn that most of the popular free LLM-based tools, such as ChatGPT, Claude and DeepSeek, can produce adequate code for simple tasks from a written input. This created a valuable opportunity to work on higher-level coding skills, including structure, design and debugging, which are not usually explored when you're learning the basics.
Most programmers will tell you that it is not specific knowledge of syntax and functions that makes a good coder. It's bigger-picture, higher-order critical-thinking skills. Fortunately, most scientists and researchers have already developed many relevant skills: logical and algorithmic thinking, a sense of organization and long-range order, attention to detail and anticipation of possible failures.
Much conventional education is structured around building a foundation of the basics before bringing this information into context. This is obviously important: it would be impossible to read without knowing the alphabet or to play chess without knowing how the pieces move. But most of us have experienced, perhaps during our school years, a sense of boredom or futility while practising basic skills. Adding context and the holistic view of a field early on can increase motivation and make the basics easier to learn, because you understand why they are important.
When I started coding with AI, I experimented by identifying a simple task I wanted a script to perform, writing it out in detailed plain English, giving this prompt to the tool and studying the code it produced. For example, I had a batch of several dozen files, each of which contained, among other details, a specific value that I wanted to compare. I wrote a detailed outline of how I wanted a Python script to open all the files in a specified directory, search each one for the required data, collect the values and export them into a spreadsheet.
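A minimal sketch of what such a script might look like, assuming plain-text files in which the value of interest follows a recognizable label (the 'Total energy' label, the directory name and the file extension here are hypothetical stand-ins, not the specifics of my project):

import re
from pathlib import Path

import pandas as pd

# Hypothetical pattern: each file contains a line such as
# "Total energy: -1234.56"; the label is an assumed stand-in.
VALUE_PATTERN = re.compile(r"Total energy:\s*(-?\d+\.\d+)")

def extract_value(path):
    """Return the first matching value in a file, or None if none is found."""
    match = VALUE_PATTERN.search(path.read_text())
    return float(match.group(1)) if match else None

def collect_values(directory):
    """Scan every .txt file in a directory and tabulate the values found."""
    rows = [{"file": p.name, "value": extract_value(p)}
            for p in sorted(Path(directory).glob("*.txt"))]
    return pd.DataFrame(rows)

df = collect_values("data")              # hypothetical directory name
df.to_excel("values.xlsx", index=False)  # export the table to a spreadsheet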
This simple script provided a convenient case study for learning many basic coding topics, such as regular expressions, defining functions and the syntax of important data-analysis libraries. It also allowed me to exercise my planning and organizational skills, imagine use cases and study code that was relevant to my own goals. When the code didn't work, I had the opportunity to search for bugs and think about how to fix them. It was very motivating to move from writing single lines of generic code to working with real examples that were important to me.
I have a growing library of scripts that I use every day, which have shaved dozens of hours off my workload since I started a few months ago. If it weren't for AI, it would probably have taken me several more months to get to where I am now, and it would have been difficult to justify the time commitment needed to get this project started in the first place.
I understand why others might still be ambivalent about AI tools, or reluctant to use them. I don't usually rush to adopt new tech, but in this case I'm glad I kept an open mind and was patient. A willingness to get creative about how a tool might be useful, and to calibrate your expectations, can pay off. And in spite of pervasive fears that AI will take over human tasks and weaken our cognitive skills, it has real potential to help us learn the skills we find most crucial to meaningful work.