<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title><![CDATA[Jake McCrary's articles on AI]]></title>
  <link href="https://jakemccrary.com/atom.xml" rel="self"/>
  <link href="https://jakemccrary.com/"/>
  <updated>2026-03-14T17:06:03+00:00</updated>
  <id>https://jakemccrary.com/</id>
  <author>
    <name><![CDATA[Jake McCrary]]></name>
  </author>
  <entry>
    <id>https://jakemccrary.com/blog/shipping-little-apps-anywhere-anytime/index.html</id>
    <link href="https://jakemccrary.com/blog/shipping-little-apps-anywhere-anytime/index.html"/>
    <title><![CDATA[Shipping little apps anywhere, anytime]]></title>
    <updated>2025-11-28T23:59:59+00:00</updated>
    <content type="html"><![CDATA[<div><p>I&apos;m a big fan of making <a href="/blog/2020/10/03/go-create-silly-small-programs/">small (sometimes silly) programs</a>. As a software developer, you have a superpower: you can identify problems in your life and fix them by creating some specific software that solves for exactly what you need. When scoped small enough, creating these tiny programs takes minimal time investment.</p><p>When you develop the practice of recognizing when a bit of software would be helpful, you see opportunities all the time. But you don&apos;t control when you get inspiration for these programs. So you come up with strategies for handling these bursts of inspiration.</p><p>One strategy: Write yourself a note (paper, email to yourself, some app on your phone) and maybe get around to it later. (You occasionally manage to get around to it later.) Another strategy: Think about the inspiration and trick yourself into thinking you&apos;ll remember it later when you&apos;re at a computer. You justify this by claiming if you forget it, it must not have been important.</p><p>These workflows are fine but they leave a lot of room for never following up. With modern AI tools, we can do better.</p><p>My new strategy:</p><ol><li>Inspiration strikes!</li><li>I pull out my phone and open my web browser to <a href="https://openai.com/codex/">OpenAI&apos;s Codex web app</a>.</li><li>I translate my inspiration into a task and type (or voice-to-text) it into Codex.</li><li>I submit the task to Codex, go about my day, and check on it later.</li><li>Later: read the diff, click the Codex button to open a PR, merge the PR through GitHub&apos;s mobile interface, and let GitHub Actions deploy the changes to GitHub Pages.</li></ol><p>I started using this technique in early summer 2025. Since then, I&apos;ve been able to develop and iterate on a handful of single-page web applications this way. As models improve, it is getting even easier to knock them out. 
It works well for either making a new application or tweaking an existing one.</p><p>Here is my setup:</p><ul><li>I have a single repo named <a href="https://github.com/jakemcc/experiments">experiments</a><a href="#fn-1" id="fnref1"><sup>1</sup></a> on GitHub.</li><li>This repo has a subdirectory per application.</li><li>The applications are in a variety of web languages (HTML, CSS, TypeScript, JavaScript, ClojureScript).</li><li>OpenAI Codex is linked with this experiments repo.</li></ul><p>With this setup, I&apos;m able to follow the above strategy with minimal friction. If I have an idea for a new little application, I open Codex and provide a description of what I want and what it should be called, and it usually manages to start work on it. When I have an idea for tweaking an application, I open Codex and tell it which subdirectory the app is in and what tweak I want made. All of this can be done from a smartphone.</p><p>When Codex is done, I do a quick scan through the diff, click the buttons to open a PR, merge it, wait for the deploy, and then check on the deployed artifacts. The apps end up published at <a href="https://jake.in/experiments">jake.in/experiments</a>.</p><p>It isn&apos;t all smooth; sometimes a problem is introduced. Depending on the problem, I&apos;ll either revert the code and try again, or give Codex more instructions and have it try to fix the problem. If really needed, I&apos;ll fire up my laptop and fix it myself or iterate with AI on fixing the problem there.</p><p>The bar for creating specific software has been seriously lowered. Go do it. It is fun, but in a different way than traditional programming.</p><ol class="footnotes"><li class="footnote" id="fn-1"><p>I don&apos;t know if this limitation still exists, but when I was initially setting this up, my experiments repo had zero commits. This caused problems in Codex that were fixed by adding a single commit.<a href="#fnref1">↩</a></p></li></ol></div>]]></content>
  </entry>
  <entry>
    <id>https://jakemccrary.com/blog/humans-ask-computers-propose-humans-decide/index.html</id>
    <link href="https://jakemccrary.com/blog/humans-ask-computers-propose-humans-decide/index.html"/>
    <title><![CDATA[Humans ask, computers propose, humans decide]]></title>
    <updated>2025-08-17T23:59:59+00:00</updated>
    <content type="html"><![CDATA[<div><p><strong>Warning: There are minor spoilers of parts of <em>A Deepness in the Sky</em> in this article.</strong></p><p>I was reading Vernor Vinge&apos;s <a href="https://en.wikipedia.org/wiki/A_Deepness_in_the_Sky"><em>A Deepness in the Sky</em></a> when a paragraph made me think of today&apos;s AI tools.</p><p>In <em>A Deepness in the Sky</em>, one of the groups of humans, the Emergents, has figured out how to take advantage of a &quot;mindrot&quot; virus that was a plague on their homeland. Once a person is infected, the Emergents are able to manipulate the mindrot to force an obsession. This practically turns the infected person, colloquially called a ziphead, into a specialized appliance focused on their obsession and little else.</p><p>In the following paragraph, one of the Emergents talks about how they use a subset of the zipheads to enhance their ship&apos;s computer:</p><blockquote><p>They left the group room and started back down the central tower. “See, Pham, you—all you Qeng Ho—grew up wearing blinders. You just know certain things are impossible. I see the clichés in your literature: ‘Garbage input means garbage output’; ‘The trouble with automation is that it does exactly what you ask it’; ‘Automation can never be truly creative.’ Humankind has accepted such claims for thousands of years. But we Emergents have disproved them! With ziphead support, I can get correct performance from ambiguous inputs. I can get effective natural language translation. I can get human-quality judgment as part of the automation!”</p></blockquote><p>The zipheads see the requests made by the users of the ship, apply their human judgment to the request, and then work with the computer to fulfill their interpretation of what the user is requesting. This allows the Emergents to make ambiguous requests to their ship&apos;s computer, requests a human would understand but a computer could not, and get back quality results. 
There are literally humans-in-the-loop of the Emergents&apos; computer system.</p><p>This paragraph made me think about how I use the current crop of AI tools and how it&apos;s changed how I interact with computers. I can now open an app and poorly specify what I want (don&apos;t fix typos, don&apos;t bother with full sentences, be vague) and the computer still often manages to perform the task or find the information I&apos;m asking about.</p><p>I can underspecify what I&apos;m looking for and get approximately a &quot;human-quality&quot;<a href="#fn-1" id="fnref1"><sup>1</sup></a> fulfillment of that request. Not only that, but the AI response often comes back and asks about follow-up steps and offers to perform them. And all without enslaving other humans with a virus and attaching them to the computer.</p><p>This is amazing. Does it work 100% of the time? No. But wow, it works enough of the time to be a big game changer.</p><p>Here are three examples of varying degrees of specification while working with an AI:</p><h3 id="typos-barely-matter">Typos barely matter</h3><p>I rarely correct typos anymore when searching on Google or asking an AI for help with something. The computer doesn&apos;t care and still mostly does the right thing. To be fair, this has been gradually happening over time with Google&apos;s ability to make sense of garbage searches, but modern AI tools have drastically accelerated it.</p><h3 id="time-series-triage-with-o3">Time series triage with <code>o3</code></h3><p>Earlier this year, I threw a CSV of memory stats and usage metrics for a ton of JVM processes my team manages at OpenAI&apos;s <code>o3</code> model. I mentioned three services that ran out of memory on specific dates and asked it to identify other services that might be approaching memory problems. <code>o3</code> identified a pattern in the data that correlated with running out of memory and flagged a few other processes that might be approaching a problem. 
I looked at more metrics, agreed with <code>o3</code>, and then changed some memory settings to avoid future problems.</p><h3 id="config-migrations">Config migrations</h3><p>I recently needed to change a couple of values in about 45 config files. This wasn&apos;t something a simple <code>sed</code> could do because each file needed a unique value derived from another service&apos;s config. I provided a command that would tell the AI agent (or me, if I were doing this by hand) whether the config values were correct, and told it to run the command and fix the problems. I didn&apos;t specify what to change; it figured that out while I worked with some coworkers on a bug. Once I wrapped up with my coworkers, I reviewed the changes, agreed with them, and moved on to the next task.</p><h2 id="asks-➡-🤖-proposes-➡-🙂-decides">🙂 Asks ➡ 🤖 Proposes ➡ 🙂 Decides</h2><p>We used to have to give a computer explicit instructions through programming. Now we can describe desires and outcomes in natural language, and the computer will often do a decent job of achieving our goal or at least getting us to a reasonable starting point from which to take over.</p><p>The computer takes our ambiguous input, proposes a solution, and then we&apos;re able to step in and accept, reject, or refine the results. This is an exciting change, and thankfully we&apos;re able to achieve it without turning other humans into infected appliances.</p><pre><code class="language-bash">┌─────────────┐     ┌────────────────┐     ┌─────────────┐
│  Humans ask │ ──▶ │ Computer       │ ──▶ │ Humans      │
└─────────────┘     │  proposes      │     │   decide    │
      ▲             └────────────────┘     └─────────────┘
      │                                               │
      └───────────────────────────────────────────────┘
</code></pre><p>This feels like Vinge&apos;s ziphead-supported computer but we&apos;ve replaced infected humans with AI models<a href="#fn-2" id="fnref2"><sup>2</sup></a>.</p><ol class="footnotes"><li class="footnote" id="fn-1"><p>For some definition of human-quality<a href="#fnref1">↩</a></p></li><li class="footnote" id="fn-2"><p>How much longer until the non-infected humans are also replaced? Hopefully we&apos;re able to avoid a future dystopia.<a href="#fnref2">↩</a></p></li></ol></div>]]></content>
  </entry>
</feed>
