The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved
Curated by THEOUTPOST
On Mon, 21 Oct, 4:03 PM UTC
3 Sources
[1]
How ChatGPT scanned 170k lines of code in seconds, saving me hours of work
Not sure how to best apply artificial intelligence (AI) to your unique, specialized needs? You've come to the right place. We'll go over how you can use a tool like ChatGPT to solve complex problems quickly, so long as you have the right prompts and a hint of skepticism.

Our context for this lesson is 3D printing. A standard test print called a 3DBenchy checks printer performance by helping users measure speed and various print-quality metrics, and it takes most printers an hour or two to print out. I recently tested a new printer that's supposed to be faster than many others. On this printer, the Benchy took 42 minutes, while on other 3D printers in the Fab Lab it took 60 to 70 minutes. But here's the thing: the test version provided by the company that makes the printer took 16 minutes. That's a heck of a difference.

3D printers are controlled with G-code, a program custom-generated by a tool called a slicer that controls how the printer moves its print head and print platform, heats up, and feeds and retracts molten filament. The pre-sliced G-code provided by the factory for the printer I was testing resulted in a 16-minute print. The G-code I generated using the company's slicer resulted in a 42-minute print. I wanted to know why.

Unfortunately, no one on the company's support team could answer my question. Despite numerous tries, I couldn't get an answer about which slicer settings to change to make the G-code I produced with their slicer perform as well as the factory-provided G-code. After many web searches and reading posts from frustrated Reddit users, it was clear that other customers had the same problem.
Here's a machine capable of more than double the performance, yet none of us could reproduce that performance successfully. This is where ChatGPT comes into the picture.

G-code consists of thousands of lines of terse movement and temperature commands. Together, both Benchy G-code files had 170,000+ lines of code. I didn't intend to spend a Saturday afternoon sifting through that stuff manually. But I thought, perhaps, AI could help.

I had the G-code I generated using the slicer. I could also export and save the G-code provided by the factory. Using ChatGPT Plus, I fed both files into the AI. I started by confirming ChatGPT could read the files. After I uploaded each file, I asked: Can you read this? ChatGPT confirmed, stating, "I can read the contents of the file. It appears to be a G-code file, typically used to control 3D printers." That was a good start.

To ensure we were clear on which file was which, I gave ChatGPT labels for the files: Let's call the first file uploaded "regular print" and the second file uploaded "fast print". Okay? Other than naming one of the files "fast print", I gave ChatGPT no indication of what I was looking for. Even so, the bot identified that one print had higher print speeds, although the temperature settings were the same.

At this point, ChatGPT started to annoy me. Instead of giving me details from the code I provided, it speculated. The AI used phrases containing "likely," "may," and "might" to describe why the print was faster. But I had given it G-code files that described exactly what the printer was doing, so I wanted an exact answer. As is often the case with ChatGPT, the conversation was a lot like talking to a brilliant grad student who is somewhat stubborn and uncooperative.
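For readers who want to poke at their own G-code without an AI, the core of the speed comparison can be sketched in a few lines of Python. This is a rough sketch, not the analysis ChatGPT performed; the helper names are mine, and it assumes feed rates appear as F parameters (mm/min) on G0/G1 movement commands, as they do in common slicer output:

```python
import re

def feed_rates(gcode_lines):
    """Collect the F (feed rate, mm/min) values from G0/G1 movement commands."""
    rates = []
    for line in gcode_lines:
        line = line.split(";")[0]              # strip trailing comments
        if re.match(r"G[01]\b", line):         # only rapid/linear moves
            m = re.search(r"\bF(\d+(?:\.\d+)?)", line)
            if m:
                rates.append(float(m.group(1)))
    return rates

def compare(regular, fast):
    """Summarize how two prints differ in commanded speed."""
    r, f = feed_rates(regular), feed_rates(fast)
    return {
        "regular_max_feed": max(r),
        "fast_max_feed": max(f),
        "speedup": max(f) / max(r),
    }
```

Feeding both files' lines through `compare` surfaces the same headline difference the AI reported: one file simply commands much higher feed rates than the other.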
I finally landed on this prompt, which teased out workable answers: The G-code provided in both files is the only thing that is different for these prints. Using solely the G-code provided as comparisons, what slicer settings would be different? Don't speculate on what other settings might be. Base your analysis only on the code provided.

ChatGPT identified three key factors: differences in print speed, acceleration, and jerk settings. That result was interesting. However, I wanted to know whether the company hand-optimized the G-code or generated it directly in the slicer. So, I asked ChatGPT: Can you tell if fast print has been hand-coded or was generated by a slicer? Perhaps look for inconsistent commands or non-standard comments. The AI responded with three interesting considerations. What these results tell me is that it is probably possible for users to modify their slicer settings to get similar performance.

We've had some very active comments on this article. For the most part, I've gone in and answered questions as they came up. I encourage you to visit the comments to participate and read what other readers have to say on this topic. I don't always get the chance to respond to comments, but I try. Sometimes, people post days, weeks, or even months after an article goes up and I've moved on to other articles. But I always welcome reader comments. Because most ZDNET readers are pros, the comments are often rich with useful (if occasionally painful to read) information. I've learned a lot from ZDNET comments, and I'm sure you will, too.

We've learned that ChatGPT understands G-code. That's unsurprising because, in my earliest tests, we learned that ChatGPT has a fairly good command of even the most obscure programming languages.
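The hand-coded-versus-slicer question can also be approximated mechanically. The article doesn't reproduce ChatGPT's exact considerations, so the heuristic below is my own sketch in the same spirit: most slicers stamp the top of their output with an identifying comment (for example, a line like "; generated by ..."), while hand-written or hand-edited files often lack one. The function name and signature list are assumptions, not anything from the article:

```python
def looks_slicer_generated(gcode_lines, header_window=50):
    """Heuristic: check the file header for a slicer's identifying comment.

    A file with no such signature, or with inconsistent comment styles,
    is more likely to have been written or edited by hand.
    """
    signatures = ("generated by", "sliced by", "slicer")
    header = "\n".join(gcode_lines[:header_window]).lower()
    return any(sig in header for sig in signatures)
```

A heuristic like this can only suggest an answer, which is exactly the hedged kind of conclusion ChatGPT offered, too.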
We also learned that ChatGPT can sift through and compare 170,000+ lines of machine instructions and reach actionable conclusions in seconds.

Finally, we learned we can use AIs like ChatGPT to explore complex problems from several angles. Not only did ChatGPT explain the vast speed difference between the two files, but it was also able to assess whether the factory-provided file had been hand-tweaked.

In conclusion, don't accept what the AI tells you as absolute truth. Don't make critical decisions based on its answers. And remember that you sometimes have to negotiate with the AI before it's willing to give you helpful answers.

This test is yet another case where I've been able to turn to the AI and, without writing any code, find an answer to a very me-specific question in minutes. If you have a question that requires a lot of text or numerical analysis, consider running it by ChatGPT or one of the other AIs. You might get a useful answer in minutes.

Writing this article about the problem took me a few hours. The actual analysis process, from start to finish, took me less than 10 minutes. That's some serious productivity right there.
[2]
5 ways I actually used ChatGPT this year to improve my life
To my surprise, the AI chatbot has proven beneficial in unexpected ways. When ChatGPT first landed on the scene, I was terrified. It was light-years ahead of virtual assistants like Siri, Google Assistant, and Alexa, and it seemed like it was going to render my job obsolete -- maybe even all jobs.

Fortunately, we now know that ChatGPT is really just a glorified chatbot, and it's far from ready to replace real-world workers, let alone take over the world. Some have even swung in the opposite direction, claiming these AI chatbots are novelty gimmicks and basically useless. I wouldn't go that far, though. In fact, when I tried using ChatGPT in my day-to-day life, I was surprised by how beneficial it was. It's a tool -- and like any tool, you have to know how to use it to get any value out of it. Here are some real things I've done this year using ChatGPT and the various ways it has actually improved my life.

Is ChatGPT itself good at coding? That depends. If you were to ask most programmers whether ChatGPT could do their jobs for them, they'd emphasize how far away we are from that reality just yet. But as a tool for guidance and aid in understanding syntax, concepts, and other programming-related things? It's not bad at all. So while ChatGPT's developers have been working on their own obsolescence, I've been using ChatGPT to help me learn to code.

I've wanted to make a game for years, but it wasn't until 2024 that I finally sat down and took steps to make it happen. I'd previously learned BASIC about 30 years ago, and I'd dabbled in Python and Flash's ActionScript back in the '00s, but realistically I was going into this as a complete beginner -- and that's where ChatGPT proved genuinely useful. After I told it what I wanted to develop, it helped me choose a game engine, Game Maker. Then, after I made a few tutorial games, I started working on my own with ChatGPT guiding me along the way.
If I didn't know how to do something in Game Maker, it pointed me in the right direction. When I didn't know the differences between an Array and a DS Map, it explained them to me. When I made basic syntax errors that I couldn't spot, ChatGPT found them in seconds.

Of course, I also tried getting ChatGPT to write code for me, but that's where it struggled. It often implemented things very inefficiently, or added too much commenting, or the code just didn't work properly. And even when it did work, I -- as a complete novice -- couldn't understand how it worked, so when something broke, I was never able to fix it.

Now, months into my coding journey, I don't use ChatGPT as much because I can trudge my way through most problems. But when I can't conceptualize how to create something (because I just don't have the programming experience to understand how it could be done), ChatGPT is still super useful. I'll ask it to give me three ways to approach a problem, ranked by efficiency or modularity, depending on what I'm doing. That way, I still make all the big calls but also have my little chatbot helper do some of the basic grunt work for me.

I love tabletop roleplaying games (TTRPGs). With a good group, it's one of my favorite activities. Call of Cthulhu was my first real love, but I've got a near-decade-old Dungeons & Dragons campaign I'm still part of, and I've also played a few one-shots in different systems over the years. But now I'm keen to run something new, and it's been a while since I GM'd a game. Honestly, I'm a bit nervous and feel out of practice. What can I do other than grit my teeth and just see how it goes? ChatGPT to the rescue!

I've always practiced roleplaying out loud before sessions, playing around with my characters' voices with different accents or inflections and organically coming up with backstories just by seeing what I can manage to pull out of my hat via improvisation.
But with ChatGPT, I can do better -- by enacting real back-and-forths with imagined players -- and it's really quite effective. With ChatGPT's Advanced Voice mode, you can hold fluid conversations with the chatbot, and it does a great job of inhabiting any characters or personas you give it. It can't change its voice mid-chat, but it can play multiple characters and give them different vocal styles and tics.

And while I haven't tried this next idea myself yet, you could even have ChatGPT play a character in your game during a session. This could be great when someone flakes at the last minute and leaves a hole in your party, or when no one wants to play a particular role. ChatGPT isn't perfect, but it could fill in as needed.

I am so grateful to be alive at a time when my young kids (who are 4 and 6 years old) can ask me questions I don't know the answers to, and I can simply say "I don't know, but we can look it up!" before pulling out my phone and finding the answers within seconds. What an improvement over visiting the library or digging out an encyclopedia.

But there's something lost in the modern process of information discovery. It's little more than me staring at my phone screen for 30 seconds while they wait (im)patiently by my side. And because I intentionally try to limit the amount of time they see me eyeballing my little black mirror, this process feels doubly off.

That's why I appreciate ChatGPT's Advanced Voice mode, which can be a much more fun and engaging way to answer our questions. We all sit and listen for the answer together -- it's not just me looking it up and imparting knowledge; they get to discover with me. It's just more interesting and surprising that way. It also lets my kids practice speaking clearly as they articulate their questions.
With this, you do have to bear in mind that OpenAI is likely harvesting your children's voice data and the questions asked for training, so I wouldn't use this for anything sensitive. Plus, there's the potential for answers to go over their heads or to be flat-out wrong, so I wouldn't do this when I need to guarantee accurate answers. Then again, that could be a good opportunity to teach them to always check their sources when it comes to information.

Like many people, I suffer from anxiety because [gestures at the state of the world], and I have to manage it on a day-to-day basis. I have sessions with a therapist, I practice mindfulness, I exercise regularly, I watch my diet, and I try to limit doomscrolling whenever I can. But that's not to say I have anxiety completely under control. It's an ever-present threat to my productivity, mood, and gut health, so I'm always keen to try out new ways of managing it.

One of the things I've been trying is to "talk it out" with ChatGPT. I've asked it for help with mindfulness, had it coach me through breathing techniques, and even offered up some of my more complicated personal struggles to get another perspective on them. I've found it really helpful that I can choose to chat with ChatGPT through text alone (when I'm not feeling too vocal) or through quiet voice (when I don't want others nearby to hear).

The recently implemented Advanced Voice mode has made ChatGPT more nuanced, too, and therapist-like conversations aren't impossible. In fact, ChatGPT can sometimes even sound like it genuinely cares. (Of course, it doesn't actually have any emotions, but it's still effective nonetheless.) It feels a little odd reaching out to an AI for human connection, and there's a very valid concern over privacy when using ChatGPT in this way. But I can say that it definitely works for me, at least in part.
Sure, ChatGPT is nowhere near as good as seeing a real-life therapist, and I don't expect its conversational abilities to ever be a one-to-one replacement for the real social experiences we have with fellow humans. But when I'm in a bind and just need a little bit of support right then and there, ChatGPT is a great alternative.

One of my recent hyperfixations has been World War II, so I've been enjoying a lot of historical documentaries, videos, and podcasts on the topic over the past several months. As with any major historical event, though, I can't help but imagine a million "What if...?" scenarios that could've arisen if minor things hadn't happened as they did. If I were more academically minded and had the time, maybe I'd do some genuine research into such "alternate histories" and write papers that could be of interest to others. In reality, though, it's just a musing in the moment -- and that's where ChatGPT can be a lot of fun.

I've asked ChatGPT to come up with alternate gameplans for battles, supposing a different general was in charge. What would've happened if Hitler hadn't been obsessed with taking Stalingrad? What would've happened had the Allies just rolled on into Russia after the defeat of Germany? What if Churchill had taken the advice of his cabinet to surrender in 1940? What if the US hadn't stopped at two nukes?

Of course, these are all far too complex to really know the answers to. But if I wanted to get even a rough estimate of the outcome on my own, I'd need to do so much historical research, and know the topic far better than I do, that it would take weeks or months to approach these questions with even a sense of veracity. ChatGPT can be an immediately accessible, opinionated history buff. It's probably wrong, but there's no way to prove it -- and it's likely to be more accurate than whatever I could come up with myself.
More importantly, it's an interesting what-if scenario that I get to explore, all because ChatGPT has the repository of knowledge needed to quickly generate an idea of what could've happened. I suppose you could ask it to think about alternative futures, too, but that feels a little too real for now. I'll stick to having it describe things that couldn't possibly happen. It's much more relaxing.
[3]
I Used ChatGPT to Fix My PC (Here's How It Went)
Recently, my Windows PC stubbornly refused to boot, so I turned to the traditional troubleshooting approach that's served me well for a couple of decades: I typed some of the symptoms into Google to see what came up. There's a wealth of advice out there on the web -- millions of forum and Reddit posts asking for PC help, and millions of posts trying to offer solutions. Depending on what your issue is, it can take some time to find relevant information, but this approach often gets results.

However, having tried a variety of fixes suggested by the web at large, plus a few ideas of my own, Windows still wasn't starting up properly. So, I decided to see if generative artificial intelligence could lend a hand -- besides writing poetry and finding jobs, could it also tell me how to get Windows working again? We know AI is trained on vast swathes of the open web, including support forums and Reddit threads. But is it smart enough to summarize and synthesize all of this data into a form that's actually helpful for solving computer problems?

My Windows PC is set up with an SSD for the operating system and programs, and an HDD for games and everything else, and it's on most of the time -- so when we had a power cut in our local area, everything turned off instantly and without warning. After that, the SSD with Windows on it wouldn't boot as normal. On startup, the PC displays a blue screen with the message "UNMOUNTABLE_BOOT_VOLUME", so that's our first clue.

Using a Windows 11 recovery USB drive, I'm able to access the Startup Repair utility, but this simply displays the message that it "couldn't repair your PC." The next option is the command prompt, and from there I can see the files and folders on both my SSD and HDD -- suggesting the data is there, but the drive can't be booted from.
Then I got into advice from the web, including the command prompt line "sfc /scannow" to scan for and fix errors (this told me a fix had been made, but it didn't make any difference), and a series of "bootrec" commands -- "/fixmbr," "/fixboot," "/scanos," and "/rebuildbcd" -- which either completed successfully or told me access to the SSD was denied. The old faithful "chkdsk" command wouldn't run either, and threw up a write-protected message too.

At this stage, it seemed the power cut had messed up the SSD somehow and put it into a write-protected mode -- something that seems fairly common. While the advice attached to online posts describing similar issues is mostly just to replace the drive, on the next boot I got a different blue screen message: "The Boot Configuration Data for your PC is missing or contains errors."

The blue screen recommended reinstalling Windows, so, as all my data is backed up, before I gave up on the SSD I tried putting a fresh copy of the operating system in place from an attached USB drive. However, when it came to the list of drives I could install Windows 11 on, the SSD wasn't included. So I was most likely looking at a borked SSD -- even though the files on it were correctly listed when I viewed it through the command prompt.

My last resort was generative AI, and while I wasn't optimistic about my chances of fixing the problem at this point, I thought it was worth a try. For the repair job, I called on ChatGPT's o1-preview model: it is, OpenAI says, the best model for advanced reasoning, and my feeling was I needed all the advanced reasoning I could get. At the moment, however, you need a ChatGPT Plus subscription to access o1-preview, which will set you back $20 a month. Having carefully typed out the problem in as much detail and with as much context as I could, I let ChatGPT get to work.
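For reference, here are the recovery commands tried above, consolidated. They run from the recovery environment's command prompt; the C: drive letter is my assumption (the article doesn't specify one), and outcomes will vary with the failure mode:

```
sfc /scannow          :: scan protected system files and repair what it can
bootrec /fixmbr       :: write a fresh master boot record
bootrec /fixboot      :: write a new boot sector to the system partition
bootrec /scanos       :: scan all disks for Windows installations
bootrec /rebuildbcd   :: rebuild the Boot Configuration Data store
chkdsk C: /f          :: check the file system and fix logical errors
```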
If you're using the o1-preview model, the responses take longer to show up, but they come with on-screen messages like "identifying possible causes" and "diagnosing SSD health." As it sometimes does for testing purposes, ChatGPT first showed me two responses and asked me to choose the best -- which was a little difficult, as I didn't know if either of them was right.

The main suggestions I hadn't tried before were to use the Diskpart utility (which couldn't see my SSD) and to remove BitLocker encryption (which didn't work either). I also got some more generic tips, including backing up data, testing the SSD in another PC, and using whatever diagnostic tools the manufacturer provided.

Where AI bots really have the advantage over a straight web search is in their two-way nature: I could ask follow-up questions, float ideas about what had gone wrong, ask for clarification on any point, and tweak my prompts. Most of the time, the responses made sense and were valid (the "chkdsk" and "bootrec" commands came up again), but in the end even ChatGPT had to admit defeat and acknowledge the "strong indicators" of hardware failure.

Having already established that I was probably looking at an unfixable SSD problem and would be replacing the whole drive, it may have been a little unfair to expect ChatGPT to work miracles. However, it did reach the right conclusion (I think), and even offered up some helpful suggestions for preventing the same problem from happening again (primarily, an uninterruptible power supply). It feels like a more personalized and helpful troubleshooting option, as long as its output can be trusted. Of course, this is a sample size of one: I'd need to run multiple kinds of PC problems through ChatGPT to see if it was actually useful for computer repair.
For now, it doesn't look as though Microsoft or Apple are confident enough in the tech to offer any kind of repair bot -- perhaps because they don't want to be responsible if a bad AI idea leads to significant data loss. While all of the answers I saw made sense logically, no one has yet fixed the generative AI hallucination problem.
ChatGPT demonstrates its versatility in analyzing complex code, aiding in game development, enhancing tabletop roleplaying experiences, and even attempting PC repairs, showcasing both its strengths and limitations in real-world applications.
In a remarkable demonstration of AI capabilities, ChatGPT, OpenAI's large language model, successfully analyzed over 170,000 lines of G-code for 3D printing in mere seconds. This feat, which would have taken hours for a human to accomplish, showcases the AI's potential in streamlining complex technical tasks [1].
The analysis was prompted by a discrepancy in print times for a 3DBenchy test, with the manufacturer's pre-sliced G-code resulting in a 16-minute print, while user-generated code took 42 minutes. ChatGPT identified key differences in print speed, acceleration, and jerk settings, suggesting that users could potentially modify their slicer settings to achieve similar performance [1].
While ChatGPT is not yet capable of replacing human programmers, it has proven to be an invaluable tool for learning and understanding code. A beginner game developer reported using ChatGPT to choose a game engine, explain programming concepts, and provide guidance on problem-solving approaches [2].
The AI's ability to explain syntax, identify errors, and suggest multiple approaches to coding problems has made it a useful companion for novice programmers. However, it's important to note that ChatGPT's code generation capabilities are still limited, often producing inefficient or non-functional code [2].
ChatGPT's Advanced Voice mode has found an unexpected application in the world of tabletop roleplaying games (TTRPGs). Game masters have been using the AI to practice character voices, improvise backstories, and even simulate player interactions [2].
This feature allows for fluid conversations with the chatbot, which can inhabit multiple characters with different vocal styles. Some users have even suggested using ChatGPT to fill in for absent players during gaming sessions, although the effectiveness of this approach remains to be seen [2].
In an attempt to push the boundaries of ChatGPT's problem-solving capabilities, a user sought the AI's help in diagnosing and fixing a non-booting Windows PC. While ChatGPT didn't ultimately solve the hardware issue, it demonstrated a logical approach to troubleshooting [3].
The AI suggested various command-line utilities and diagnostic tools, and even recommended preventative measures for future issues. However, the experience highlighted the current limitations of AI in complex hardware troubleshooting scenarios [3].
These real-world applications of ChatGPT illustrate its versatility and potential impact across various domains. From technical analysis and coding assistance to creative endeavors and problem-solving, the AI has shown promise in augmenting human capabilities.
However, it's crucial to approach AI-generated solutions with a critical eye. As demonstrated in the PC troubleshooting case, ChatGPT's responses, while logical, may not always lead to successful outcomes. Users should view the AI as a tool to enhance their own knowledge and skills rather than a replacement for human expertise [1][2][3].