<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>AI on Spinning Code</title>
    <link>https://spinningcode.org/tags/ai/</link>
    <description>Recent content in AI on Spinning Code</description>
    <generator>Hugo -- 0.152.2</generator>
    <language>en-US</language>
    <lastBuildDate>Sat, 09 Aug 2025 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://spinningcode.org/tags/ai/feed.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Guide to AI in College From A Professor&#39;s Spouse</title>
      <link>https://spinningcode.org/2025/College-and-AI/</link>
      <pubDate>Sat, 09 Aug 2025 00:00:00 +0000</pubDate>
       <guid isPermaLink="false">https://spinningcode.org/2025/College-and-AI/</guid> 
      <description>AI tools are creating change everywhere – especially college campuses. But we need to talk about how we use it.</description>
<content:encoded><![CDATA[<p>The <a href="/2024/08/more-advice-from-a-college-professors-spouse/">last time I offered advice to college students</a>, <a href="https://en.wikipedia.org/wiki/Generative_artificial_intelligence">generative AIs</a> (ChatGPT and friends) were just emerging. Now AI tools are a critical part of work in many fields, and college students are using them all over the place. So I would like to offer some suggestions and thoughts for those in college who are using, or tempted to use, AI to complete their assignments.</p>
<p>As you are about to see I have concerns about using AI in your education. That doesn&rsquo;t mean I&rsquo;m opposed to all things AI. I use it in my work when it&rsquo;s useful, and in keeping with company policy. I use it on side projects even more. Generative AIs are downright useful – when used in the right context.</p>
<h2 id="using-ai-for-college-assignments">Using AI for College Assignments</h2>
<p>AIs were <em>not</em> created as learning tools. It&rsquo;s not that they can&rsquo;t be useful in helping you learn. <a href="https://www.npr.org/2025/08/06/g-s1-81012/chatgpt-ai-college-students-chegg-study">Tools are coming out</a> that hope to use generative AI as study platforms. You can use AI to generate extra problem sets, flash cards, quizzes, and other materials to practice with. But if AI is generating the answers for you, or writing your paper, then you are not doing the work. If you are not working, you are not learning.</p>
<p>Learning should be hard. <a href="https://sarahrosecav.substack.com/p/in-defense-of-friction">Learning friction is good</a>. If you are using AI to make learning easy, you&rsquo;re doing it wrong.</p>
<p>If you want to <em>learn</em> don&rsquo;t use AI. If you want to be <em>employable</em>, don&rsquo;t use AI for your course work. If you don&rsquo;t want to learn, please skip college: don&rsquo;t waste your money or your professor&rsquo;s time.</p>
<h3 id="learning-requires-practice">Learning Requires Practice</h3>
<p>When I give conference talks about communication skills, I say practice is critical for improvement. I also tell audiences to <em>never use</em> AI when practicing.</p>
<p>The analogy I use is that learning is like working out at the gym:</p>
<p>If I want to get stronger, I can go to a gym and lift weights. If I go on a regular basis, and put in sufficient effort, I&rsquo;ll get stronger. But if I decide that my goal is to get heavy things off the ground, I could use a forklift. That forklift will get <em>much</em> heavier stuff off the ground than I ever will. But I won&rsquo;t get stronger.</p>
<p>AI is like a forklift: it&rsquo;s powerful when used by a skillful operator, but it is not designed to make you smarter.</p>
<p>College assignments are a form of practice. They help you build new skills. Even if they feel useless, you need to do them yourself.</p>
<h3 id="their-work-isnt-that-good-anyway">Their Work Isn&rsquo;t That Good Anyway</h3>
<p>On the whole, AIs write bad college essays. I don&rsquo;t care that their creators claim they do college-level work – they don&rsquo;t.</p>
<p>Some of you <em>feel</em> like AIs produce good college essays. How sure are you that you know what good work looks like? Are you sure your professors agree?</p>
<p>You may also <em>believe</em> that no one notices when you hand in an AI-written or AI-edited essay. Why do you assume that, because you don&rsquo;t get in trouble, no one sees what happened?</p>
<p>My wife reports that AI-generated papers handed in by her students usually fail to meet the assignment in basic ways, and therefore frequently don&rsquo;t get graded at all. Her colleagues tell me similar things. Those papers generally get a zero.</p>
<p>As for whether anyone notices: just because you don&rsquo;t get busted doesn&rsquo;t mean they don&rsquo;t notice. Right now, in far too many colleges, there is little incentive for professors to punish you. Punishing cheaters often requires hours of miserable unpaid work. A zero, or another low grade, requires minimal work.</p>
<p>I know professors who admit they are letting students get away with AI-generated work. They assume the professional struggles that come later will be punishment enough. In short, they expect me to punish you by not hiring you, or by firing you for incompetence.</p>
<p>This is bad for all of us. I blame the administrators who worry more about your current satisfaction than your long-term success.</p>
<h2 id="but-my-professors-use-it">But My Professors Use It!</h2>
<p>Students like to <a href="https://www.nytimes.com/2025/05/14/technology/chatgpt-college-professors.html">raise the argument that professors use AI</a>, therefore students should be allowed to use AI. This is a hollow argument, but let&rsquo;s spend some time with it anyway.</p>
<p>The core of this argument is that it&rsquo;s hypocritical to suggest using AI is bad in one context and acceptable in another. But in life, context matters. Why <em>should</em> they be held to the same standard as you are? Their job is to support and guide your learning. Your goal should be to learn.</p>
<p>There are lots of important things students benefit from doing that professionals rarely do.</p>
<p>A good Computer Science program will put you through a course on algorithms. In that course you should learn about sorting algorithms and one of your assignments should be to code one. I took that course, wrote that code, and have never written another sort from scratch.</p>
<p>In fact, most professional developers will never write most of the algorithms you learn in that course. There are excellent libraries already written that do these basic operations really well. That doesn&rsquo;t mean writing sorting algorithms in college was a waste of my time – it was extremely useful! I learned a lot doing it.</p>
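<p>If you haven&rsquo;t taken that course yet, a sorting algorithm is small enough to sketch here. This is a merge sort in Python – my choice of language for illustration, not a claim about any particular course&rsquo;s assignment:</p>

```python
def merge_sort(items):
    """Sort a list by splitting it in half, sorting each half, and merging."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge: repeatedly take the smaller of the two front elements.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])  # at most one of these still has elements
    merged.extend(right[j:])
    return merged
```

<p>Writing and debugging something like this yourself – not pasting it out of a chatbot – is the entire point of the assignment.</p>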
<p>Same with writing in English (or other human languages) – you need to be good at it. To get good you need to do it – a lot.</p>
<p>You need the practice. Your professors don&rsquo;t.</p>
<h3 id="letting-the-ai-do-the-work-makes-you-lazy">Letting the AI do the Work Makes You Lazy</h3>
<p>One of the reasons that your professors are right to prevent you from using AI, while potentially using it themselves, is that AI makes it easy to be lazy.</p>
<p>When I use an AI to write code I notice very quickly the temptation to be lazy about checking the output of the AI. I know I cannot trust the output, but it is hard to stay engaged in reviewing all the code an AI generates.</p>
<p>The only thing, really, that keeps me focused is that I am <em>extremely</em> price sensitive: I once spent 25¢ on a task with <a href="https://cline.bot/">Cline</a> (one of the better AI tools for developers). A whole QUARTER! Worse, the code looked good but wasn&rsquo;t actually good.</p>
<p>You have to read, and understand, everything you have an AI generate. That can take nearly as much effort as creating it in the first place. But because you didn&rsquo;t create it, your brain isn&rsquo;t engaged with the details. That means you have to work harder to do the review.</p>
<h2 id="a-tiny-exception">A Tiny Exception</h2>
<p>There is a tiny exception to all this: assignments designed to help you examine how LLMs work and what they do well.</p>
<p>These will be assignments where your professor tells you to use AI. That is the one and only time I think they can make sense in an academic setting – at least currently. One day that could change, but we are a long way from that.</p>
<h2 id="i-need-to-know-how-to-use-ai-for-my-job">I Need to Know How to Use AI for My Job</h2>
<p>Yes. The future of work, heck the present of work, involves people using AI to make us more efficient. But college is not a job training program – it&rsquo;s about learning how to learn. College should equip you with the skills to grow and adapt over time.</p>
<p>AIs are one type of tool you will need. There are lots of tools you will need to learn to use. To use any tool well, you also need to understand what good and bad results look like when you use it. That takes an understanding of the fundamentals of what the tool is doing. College isn&rsquo;t the only way to learn fundamental skills – it isn&rsquo;t even the best way for everyone – but it&rsquo;s a great way for a lot of people <strong>if you do the work</strong>.</p>
<p>Leverage the skills you learn while learning in college to teach yourself how to use AI tools.</p>
<h2 id="remember-that-integrity-matters">Remember That Integrity Matters</h2>
<p>In the working world good managers don&rsquo;t have time to check every detail – they need to be able to trust you to do the right thing when no one is watching.</p>
<p>I need to be able to trust the members of my team. I need to know that they are following company policy without my reminding them. They need to be honest with me when there are challenges and they need support. When they make mistakes I need them to admit it so we can work together to rectify the situation.</p>
<p>If you cheat your way through college, why should I believe that you are going to have integrity in any of those conditions?</p>
<p>If I determine that you used AI to shortcut your college learning, why should I believe that you aren&rsquo;t going to use AI when it&rsquo;s a security or privacy risk? Why should I believe you can do the work yourself at all?</p>
<p>Please, take the time to do the hard work and show up knowing how to learn the tools that do not exist today.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Experiments with AI Coding</title>
      <link>https://spinningcode.org/2025/03/experiments-with-ai-coding/</link>
      <pubDate>Sat, 22 Mar 2025 00:00:00 +0000</pubDate>
       <guid isPermaLink="false">https://spinningcode.org/2025/03/experiments-with-ai-coding/</guid> 
      <description>&lt;p&gt;I recently spent some time generating some simple programs with AI coding tools. As a technical architect and senior developer, I am watching the emergence of AI coding tools carefully. I know that the robots are coming for the coding parts of my job. Watching this generation of robots is important to long-term understanding of the field.&lt;/p&gt;
&lt;p&gt;To be clear, I have been experimenting with &lt;a href=&#34;https://github.com/features/copilot&#34;&gt;GitHub Copilot&lt;/a&gt; and &lt;a href=&#34;https://developer.salesforce.com/blogs/2024/09/introducing-agentforce-for-developers&#34;&gt;Salesforce’s Agentforce for Developers&lt;/a&gt; since they were released. Most of my earlier experiments were failures. The tools wrote trivial code terribly and simply couldn’t understand more complex requests. They made me slower, not faster, so I put them aside for a time. But they are evolving fast and were worth another look.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently spent some time generating some simple programs with AI coding tools. As a technical architect and senior developer, I am watching the emergence of AI coding tools carefully. I know that the robots are coming for the coding parts of my job. Watching this generation of robots is important to long-term understanding of the field.</p>
<p>To be clear, I have been experimenting with <a href="https://github.com/features/copilot">GitHub Copilot</a> and <a href="https://developer.salesforce.com/blogs/2024/09/introducing-agentforce-for-developers">Salesforce’s Agentforce for Developers</a> since they were released. Most of my earlier experiments were failures. The tools wrote trivial code terribly and simply couldn’t understand more complex requests. They made me slower, not faster, so I put them aside for a time. But they are evolving fast and were worth another look.</p>
<p><a href="/2019/12/on-being-self-taught/">I learn by doing.</a> So I set about creating a few tools that were on my mind to see if I could get good code out of the LLMs. Or at least if they could save me enough time to be worth the learning curve to get good at using them.</p>
<h2 id="project-1-simple-openwrt-status-call-out">Project 1: Simple OpenWRT Status Call Out</h2>
<p>I recently updated my home wifi router to run <a href="https://openwrt.org/">OpenWRT</a>. When I bought the router, I selected one compatible with that OS. After realizing TP-Link would never fix some security issues in the firmware, I made the switch.</p>
<p>Our internet service from <a href="https://www.breezeline.com/">Breezeline</a> has always been spotty; sometimes bad enough I need to relocate to get stable service. One of the things I want to know when I’m working elsewhere is: has my internet come back up yet?</p>
<p>I have a programmable router, I’m a programmer, therefore this is a problem I could solve for myself.</p>
<p>But I’ve never written code to work with OpenWRT. Worse, I generally work in high-storage environments. I mostly do side projects in Python or NodeJS, but my router doesn’t have the storage needed for those run-times. OpenWRT recommends C or Lua for scripting. I’ve done my time in C. Jumping through all the hoops required to make C do something this simple did not feel like fun. So Lua it is. Too bad I hadn’t written code in Lua before…enter Copilot.</p>
<p>Throughout this project I used the current defaults – which meant GPT-4o was the selected LLM.</p>
<h3 id="initial-code">Initial Code</h3>
<p>I actually found Lua by asking Copilot. My first prompt was just:</p>
<blockquote>
<p>Generate a program for openwrt that sends a post request to an API endpoint at spinningcode.org</p>
</blockquote>
<p>And it generated a reasonable looking Lua script. Since I don’t know Lua, or at that point why Copilot chose it, I dove into a little research into the language. After a few minutes I confirmed that, yes, this is the right use case for Lua. I also confirmed Copilot made reasonable choices in how the code was written.</p>
<p>I went on to ask Copilot to change call outs from curl to a library, add features to pull in the network status information from the router, and other adjustments. It consistently generated code that was close to correct. I did have to tell it to adjust details after validating them against my router’s actual setup and commands. It took some time but on the whole I was able to make steady progress. Eventually I had a good-enough solution.</p>
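<p>For illustration, here is the shape of that status call-out sketched in Python rather than Lua (easier to show compactly here, but not what the router actually runs). The field names are my own guesses, and the endpoint URL is just a placeholder:</p>

```python
import json
import urllib.request


def build_payload(hostname, wan_up, uptime_seconds):
    """Assemble a status report; these field names are illustrative guesses.

    The real script pulled whatever OpenWRT's status commands actually
    report, which is where much of the back-and-forth with Copilot went.
    """
    return {
        "host": hostname,
        "wan_up": bool(wan_up),
        "uptime": int(uptime_seconds),
    }


def send_status(payload, url="https://example.org/api/endpoint"):
    """POST the report as JSON to a placeholder endpoint."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req, timeout=10)
```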
<h3 id="tests">Tests</h3>
<p>Automated test generation has been a promise of these tools since they arrived. It always feels like something that’s easy to do: read code, write test. Which is, of course, the 100% wrong way to write tests. However, the point of the exercise was to see what I could get the tools to do, so I gave it a try.</p>
<p>It took several tries for it to generate valid tests. The first time I asked, it wanted to know what framework to use (a fair request), but generated a useless <code>hello_world</code> test file. On subsequent tries it switched frameworks without asking, and it took several more attempts before it generated something that looked valid.</p>
<p>Too bad it <em>wasn’t</em> valid.</p>
<p>The test framework I picked, <em>luatest</em>, was a bad choice. But with little experience in the language I didn’t know that. When I finally had it refactor to <em>busted</em>, I got tests I could set up and run…and they all failed because they were all still wrong.</p>
<p>I spent more time trying to get a valid test setup than I spent on the primary code – eventually I gave up. It would have been faster to do the research and figure it out from scratch.</p>
<h3 id="code-review">Code Review</h3>
<p>All in all, Copilot gave me a reasonable piece of code for the primary request. Using Copilot was far faster than I could have done the research to write something equally good. It still needs a lot of supervision to get it right, but the refactoring requests seemed to go okay.</p>
<p>The code commentary Copilot provided in the response is detailed and accurate, where I checked it. It’s chatty, so I didn’t check every piece of commentary – just every line of code. The code itself came with passable comments about what each block and function does. And functional decomposition was vastly improved over previous experiments I’ve done with Copilot.</p>
<p>It had to make up a few details, like the actual API endpoint name (it picked <code>https://spinningcode.org/api/endpoint</code> – a reasonably good fake answer that is clear but also obviously wrong), but those were good enough for the limited details I gave it.</p>
<p>Outside of the automated test disaster, the biggest issue was that it did <em>nothing</em> to encourage secure design. Copilot recommended no security for that endpoint, nor does the commentary flag that it skipped security suggestions.</p>
<h2 id="project-2-simple-php-status-server">Project 2: Simple PHP Status Server</h2>
<p>Since I have PHP on my personal server, I decided that PHP was the right choice for the server side of the router project. Despite being a little rusty I am much more comfortable with PHP than Lua. I have standards and expectations about how to write and organize good PHP code.</p>
<p>The prompt I provided:</p>
<blockquote>
<p>I have a Lua script that looks like this:</p>
<p>[a copy of the then current version of the Lua file]</p>
<p>Now I need a PHP application that will listen to those calls, record the data into a CSV file, and includes a page that displays the last 100 lines of the csv file.</p>
</blockquote>
<p>It managed to generate valid PHP code that matched my request – sorta.</p>
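<p>The core of what I asked for is small. Appending a report to a CSV file and reading back the last 100 rows reduces to something like this – sketched in Python rather than PHP, purely for illustration:</p>

```python
import csv
from collections import deque


def record_status(path, row):
    """Append one status report as a CSV row."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(row)


def last_rows(path, n=100):
    """Return the last n rows; deque(maxlen=n) keeps memory bounded."""
    with open(path, newline="") as f:
        return list(deque(csv.reader(f), maxlen=n))
```

<p>Note there is no authentication or input validation here either – which, as with the generated PHP, is exactly the kind of gap that matters once code is web facing.</p>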
<h3 id="code-review-1">Code Review</h3>
<p>The PHP it generated is not to my standards – not even close.</p>
<p>It lacked security checks, was poorly organized, and had borderline useless comments. If you <a href="https://github.com/acrosman/openwrt-status-message/tree/main/server">look at the repo</a> you’ll see both a lot of generated refactors and a lot of hand edits, because that’s web-facing code, security matters, and I could fix it faster than I could debate with an AI.</p>
<p>I made some effort to get it to highlight the important information I wanted, but it struggled to regain the context from the OpenWRT requests that the Lua script was built around.</p>
<p>Unlike the Lua script, the PHP saved me little or no time. It was terrible in this context even with lots of attempts to refine the code. This piece of code I’ll probably maintain by hand.</p>
<h2 id="project-3-data-migration-scripts-for-salesforce">Project 3: Data Migration Scripts for Salesforce</h2>
<p>One of the hardest things to get people to fund sufficiently for a Salesforce project is the data migration. They are hard, boring (well not for everyone – I have undying respect for people who like these projects), and super time consuming. So we are always looking for ways to do them faster.</p>
<p>A few years ago I proposed a training exercise for data migrations that involved taking the data from my <a href="https://github.com/acrosman/sc_salary_data">SC Salary Data repo</a> and loading it into Salesforce. It’s a great training project because there is lots of data and it’s real-world messy. But it’s also simple: just two or three objects, depending on the data model you choose. So I decided to test whether I could use AI coding tools to generate basic scripts to clean and load this data quickly.</p>
<p>For this project I made two changes to my approach. First, I switched the model to Claude 3.5. Second, I decided to treat the LLM like a junior developer and lead it through the project more intentionally. Part of this second change was driven by my initial experiences with the other projects. I assumed from the start that the code would be sub-standard, but that I could encourage Copilot to make changes to meet my expectations – just like I do when working with a human.</p>
<h3 id="code-review-2">Code Review</h3>
<p>This project was startlingly successful. The first stage was to have it create a <a href="https://github.com/acrosman/sc_salary_data/blob/master/create_sqlite3_database.py">script that prepped all the data into a simple SQLite3 database</a> (it does some clean up, but not all that’s possible with this data). The second stage was to have it create a script to load the data. That <a href="https://github.com/acrosman/sc_salary_sf_loader">second stage became its own repo</a> because I wanted to include a Salesforce SFDX project to set up the objects and fields I wanted correctly.</p>
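<p>As a rough illustration of the first stage (minus the data clean up, which is most of the real work), loading a CSV into SQLite can be sketched in Python like this – the table name and all-TEXT columns are simplifications of mine, not the actual repo’s schema:</p>

```python
import csv
import sqlite3


def load_csv_to_sqlite(csv_path, db_path, table="salaries"):
    """Create a table from the CSV header and bulk-insert every row.

    Everything is stored as TEXT for simplicity; a real migration
    script would type, validate, and clean the columns.
    """
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        cols = ", ".join(f'"{c}" TEXT' for c in header)
        placeholders = ", ".join("?" for _ in header)
        conn = sqlite3.connect(db_path)
        with conn:  # commit on success, roll back on error
            conn.execute(f'CREATE TABLE IF NOT EXISTS "{table}" ({cols})')
            conn.executemany(
                f'INSERT INTO "{table}" VALUES ({placeholders})', reader
            )
        conn.close()
```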
<p>I generated probably 90+% of the code in <a href="https://github.com/acrosman/sc_salary_sf_loader">that repo</a> with the LLM. The Python to load the data, most of the Salesforce metadata to create the objects and fields needed, even the readme file – I generated them all in large part with the LLM.</p>
<p>Frankly this is a project I’ve seen developers struggle to get right, and with a little guidance the LLM was able to do it. It probably needed as much guidance as a human would have (a very junior human developer), but the iteration cycles were much faster. When all is said and done, I am not convinced the project was faster than doing it solo – it took a few hours to complete each stage. However, it required far less mental engagement from me than if I had written the code unassisted.</p>
<p>This project went well enough that I’m using it to help drive a conversation about how to change our approach to migrations in general. There are major differences between what I did here and what I do in a migration of a database with thousands of tables targeting hundreds of objects with millions of records. But I was impressed enough to want to move on and fill those next gaps.</p>
<h2 id="conclusions">Conclusions</h2>
<p>LLM generated code is improving fast, but it is still generating low-quality code by default. Unlike experiments I ran a year ago, it now generates functional code that is equal to the code that a brand new developer could write.</p>
<p>Generated code is not to be trusted, particularly in production or public-facing settings. If you don’t ask for security, you don’t get security. It organizes code as if it’s following a train of thought, not in the order that well-structured code should follow. And it does not write tests worth having without significant effort.</p>
<p>Good AI driven tools can make good developers faster. They are not good enough to replace a good developer – yet. Developers will need to learn to work these tools into the process, but we need to be aware of the gaps and how to close them.</p>
<p>I expect the quality also depends deeply on the quality of the code in public repos on the internet. I believe the terrible quality of the PHP I got, versus the reasonably good Lua and Python, is because there is lots of terrible PHP out there that was used to train these models. As long as LLMs write code that conforms to the average of the examples online, communities will need to sweat the details of what is out there.</p>
]]></content:encoded>
    </item>
  </channel>
</rss>
