I heard there’s trouble coming up with a good symbol for AI apps. Let me save you some money. A pictogram of a human brain with little rubber-hose legs on the bottom and hands on the sides. White gloves on them, no doubt. Slap a pair of oversized eyes in the middle. Long lashes are optional if you’re going for a feminine representation.
Our brains are nuts. I can understand my 11-year-old kid being hesitant to play the game of getting Amazon Rufus to admit it lied about the country of origin of a product. You don’t want to hurt the bot’s feelings. Kids are raised watching cartoons of talking dogs, talking cats, talking plants, talking kitchen appliances. Empathy is shoved down their throats at every opportunity, just to be taken advantage of later on. What I can’t understand is developers using the sophisticated ML models at our disposal to simulate the behavior of a software developer. Worse yet, to simulate a group of software developers talking to each other!
I know letting go of control is hard. I know that for every scary new unknown we face, we have to find a parallel to something familiar. Sure, those metaphors might diminish our potential to get the most out of the new thing. But it is a small price to pay for not risking losing our minds in a Lovecraftian slip into madness when faced with the terror of a completely alien and unrelatable Thing.
That is the only explanation I have for this whole trend of designing your own Agentic AI Coding Team. The more progressive faction of software engineers has spent forever trying to disprove metaphors borrowed from the construction and manufacturing industries. Software is NOT like building a house, software is NOT like a production line. We know that. Except we don’t. And now we have this new thing. The AI. A mfing Cthulhu at our doorstep. Something that seemingly smart people predict is the final solution to the problem of software developers. But get this: the metaphors, the models, whatever you call them, should only be judged by their usefulness. The metaphor of “AI as a virtual team” is just not useful. We have to realize we have something far superior at our disposal. We have a math equation that can produce software. This is not a metaphor. It is literally what a large language model is, and it is important to realize that.
I would much prefer that, when a lay person imagines a software developer of the future, they picture Johnny Mnemonic or John Anderton from Minority Report, with their cybergloves and headsets, crunching at 100x speed, rather than a person ordering around a legion of dumb robots, Star Wars or I, Robot style. We have at our hands a device of dreams. A function that takes a loooooong sequence of numbers as input and spits out a loooooong sequence of numbers as output. The input can represent our intent as engineers, the output can represent executable code. We can finally produce systems that are both complex and correct. Something no human will ever be capable of.
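To make that framing concrete, here is a minimal sketch in Python of the “function” view. The `next_token` callable stands in for whatever autoregressive model you happen to use; the names and the greedy loop are mine for illustration, not any specific vendor’s API.

```python
from typing import Callable, List

Tokens = List[int]  # a loooooong sequence of numbers

def generate(next_token: Callable[[Tokens], int],
             prompt: Tokens,
             max_new_tokens: int,
             stop_token: int) -> Tokens:
    """A map from one sequence of integers (your intent, tokenized)
    to another sequence of integers (code, tokenized)."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        t = next_token(tokens)  # the "equation": pick the next number from the model's output
        if t == stop_token:
            break
        tokens.append(t)
    return tokens
```

Everything else, the sampling tricks, the tools, the “agents”, is plumbing bolted onto this one function.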
What do we do instead? We play Sims with our AI. We turn models into virtual people, adding so much unnecessary indirection into the process of turning our thoughts into software. I get why it might be compelling for managers or non-technical folks. But engineers? We could do so much better. I always used to hypothesize, “if only I had a spare year to work on that, I could make this system so perfect”. Never in my life have I wondered, “if only I had a team of X people to do that”. Now I have at my disposal what amounts to a spare century of software development. Forever. You have that too. Why would you want to give away your control of the process? Why would you want to build a black box inside of which robots talk to robots to figure out what you really want?
Or maybe it is not our psychology? Perhaps it is the marketing around AI that is causing this. Coders are the earliest adopters of general-purpose LLMs. As long as the company pays for the subscription, you don’t really care whether you got your perfect code in a single exchange of messages or whether the equivalent of an NMA Annual Convention of your bots took place behind the scenes. But certainly the latter would make your provider way more money. Should we even care? About the money? Not really. About the process? Definitely. The reasons behind decisions are important, and the more steps it takes to arrive at a solution - the code - the harder it is for us to understand what happened. Even if everything is transparent.
A giant white box is no different from a black box.
Interpretability of neural networks with more than a few hidden layers is already problematic. If we create loops of models feeding models, we pretty much give up. Giving up the ability to understand the motives behind changes is a nuclear option. It should only be exercised when the reward is extremely compelling. In almost all cases it is not.
It feels nice not to have to be in the details. To outsource not only the grunt work but also the thinking about the small things. So we can only think big. Knowing the small things will be discussed ad nauseam in a language that I can, but don’t have to, understand. But that’s a pipe dream. Small things inform big things, and oftentimes there’s no clear boundary between the two. By giving away that kind of control we become less smart, and what we produce, even with all the help, is less coherent, more brittle and significantly harder for further AI iterations to make sense of.
Do me a favor. For the next few hours, stop treating your AI assistant (whether it’s coding or writing documents) as your minion. Start thinking about it as an equation and work really hard on making its inputs high in signal and low in noise. Observe whether it makes a difference. Share what you’ve learned.
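For what it’s worth, here is a made-up illustration of what “high in signal, low in noise” can mean for the same request; the file and test names are hypothetical.

```python
# Hypothetical before/after of the same ask to a coding assistant.
noisy_input = "Can you look at the parser and make it better? It feels kind of slow sometimes."

high_signal_input = (
    "In tokenizer.py, split_lines() is quadratic because it calls str.find in a loop.\n"
    "Rewrite it as a single pass over the input, keep the public signature unchanged,\n"
    "and make sure the cases in tests/test_tokenizer.py still pass."
)
```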
Thank U.