The Self-Healing Code Fallacy

Remarks about a new CI step that makes your code fix itself before it's merged

Created on 2023-04-05, last updated 2023-04-16

Yesterday, a colleague of mine shared a tweet in our #random Slack channel about a new AI-powered plugin for GitLab CI that supposedly makes your code self-healing.

Here’s the tweet from Min Choi:

Say goodbye to manual code fixes! With self-healing code @GitLab CI pipeline, powered by @LangChainAI and @OpenAI, you can automatically detect and fix issues in your code. This is the future of DevOps!

Building on the GitHub action pipeline created by @xpluscal

He goes on to explain all the nitty-gritty details of how it works:

First, a code push triggers the GitLab CI pipeline, where the Build job fails due to a code error, which triggers the SelfHeal job.

Using a combination of an LLMChain prompt and a GPT API query, the SelfHeal job fixes the code, commits, then pushes.

Second, the code push from the SelfHeal job triggers a new pipeline with the fixed code.

This time the Build job passes, and the SelfHeal job is skipped since the Build job passed.

The entire process was fully automated after the initial code push. Possibilities are endless in this age of AI. I will keep you guys posted as updates are made.
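From that description, the mechanics are easy enough to imagine. Here is a minimal Python sketch of what such a SelfHeal job might run; to be clear, query_llm, self_heal, and everything else below are my own placeholders based only on the tweet, not the actual plugin's code or API:

    # Hypothetical sketch of a "SelfHeal" CI job, reconstructed from the
    # tweet's description. query_llm stands in for the LangChain/GPT
    # call; it is not a real API.
    import subprocess

    def query_llm(prompt: str) -> str:
        raise NotImplementedError("placeholder for the LLM query")

    def self_heal(source_path: str, build_log: str) -> None:
        with open(source_path) as f:
            broken_code = f.read()
        prompt = (
            "The following code fails to build.\n"
            f"Build log:\n{build_log}\n"
            f"Code:\n{broken_code}\n"
            "Return a corrected version of the file."
        )
        fixed_code = query_llm(prompt)
        with open(source_path, "w") as f:
            f.write(fixed_code)
        # Committing and pushing re-triggers the pipeline on the "fix".
        subprocess.run(["git", "commit", "-am", "SelfHeal: auto-fix"], check=True)
        subprocess.run(["git", "push"], check=True)

Note that nothing in this loop knows what the code was supposed to do. It only knows that the build failed.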

AI Naivety

This strikes me as a classic case of AI naivety. People see LLMs do things they didn't think were possible, and do them fast. That makes them believe LLMs can do anything at all. They forget that some things remain impossible. It's as if they were let in on some deeper truth that suddenly invalidates the constraints of the old world. In the new world, AI can decide the halting problem. Just give it a program and it'll tell you! It seems to work well on the 10-line Python snippets we tested it on, doesn't it? And fuck it, who's to say if it gets it wrong.
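To make the point concrete, here is a Python snippet well under ten lines whose halting behavior nobody can currently decide: it terminates for every positive input exactly if the Collatz conjecture holds, which is an open problem.

    # Terminates for every positive n iff the Collatz conjecture is true,
    # an open problem. Any tool that claims to decide halting even for
    # tiny Python snippets would have to settle it.
    def collatz_steps(n: int) -> int:
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps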

A Bad Metaphor

Back to the self-healing code from Min Choi's tweet, the biggest problem I have with this idea is the premise that fixing code is somehow similar to how living organisms heal. Healing is when your body restores a previous "good" state after enduring some trauma. If your code self-healed whenever you broke it, that would just mean it reverts your changes, time after time. You'd be stuck with a self-preserving, "Hello, World!"-printing, CI-passing piece of nothing. To fix code, you must start from a rigorous specification of what it means for the code to be fixed. Min Choi's self-healing code would first need to read the developer's mind to figure out what it should heal into, because code is ultimately an expression of the developer's intent. No problem for the almighty AI, right?
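Here's a trivial illustration (my own, entirely hypothetical): both "fixes" below make a naive CI check pass, but only one of them matches the developer's intent, and nothing in the failing code itself distinguishes them.

    # Broken: crashes with ZeroDivisionError when xs is empty,
    # so a (hypothetical) CI smoke test fails.
    def average(xs):
        return sum(xs) / len(xs)

    # Fix 1: probably what the developer meant.
    def average_intended(xs):
        return sum(xs) / len(xs) if xs else 0.0

    # Fix 2: also makes CI green, by deleting the behavior entirely.
    def average_healed(xs):
        return 0.0

A pipeline that only checks "no crash" accepts both. Only a specification of what average is supposed to mean rules out the second.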

Contrast this wishful-thinking, eyes-shut approach with something like PUMPKIN PATCH from Talia Ringer et al., where automatic proof repair (a much more suitable term than "self-healing code") is done in the setting of a formal specification of what the program must do, or rather, what it must be: what type it should inhabit.
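For a taste of what "repair against a specification" looks like, here is a toy Lean sketch (my own illustration; PUMPKIN PATCH itself operates on Coq proofs): the theorem statement is the specification, and any repaired proof term must inhabit exactly that type.

    -- The statement is the specification. A proof-repair tool is not
    -- guessing at intent: whatever term it produces must inhabit this
    -- exact type, or the type checker rejects the repair.
    theorem add_comm_spec (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b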

Conclusion

LLMs are cool, they really are, but they can't read your mind, and they can't make your code self-heal, whatever that means. Understand what they can and can't do so you can leverage them where appropriate, but don't expect them to do the impossible.