The big story
Why companies are embracing ‘AI champions’
You can lead a horse to water, but you can’t make it drink.
The adage holds true for software engineers adopting AI, but with a small nuance. Sure, leadership can mandate engineers use AI for coding work, but those engineers will be far happier doing so if they discover the benefits themselves. This is why many engineering organizations are embracing “AI champions,” as I report in my latest story for LeadDev.
The rise of AI-assisted coding represents a fundamental shift in how software engineers work. While many have embraced it, others are justifiably skeptical. When executives hand down strict adoption mandates or push developers to hit rigid productivity targets, it only seems to breed resentment.
On the other hand, engineers who say their teams took a more grassroots approach, experimenting together and sharing their learnings as peers, report a more positive experience. This culture can also give rise to early adopters who are eager to uncover real impact and champion the role AI can play in engineering. When these early adopters or “AI champions” showcase results and practical tips they discovered, it drives organic interest among their fellow engineers in a way that actually resonates.
“We knew that you have to make people want it,” one engineer told me. “You can’t tell them a better way; you need to show them. And the person showing them needs to be an existing trusted peer.”
This story includes a bunch of first-person perspectives and experiences, but one especially illustrative example is IBM’s strategic rollout of its custom AI coding agent, Bob. Neel Sundaresan, IBM’s general manager of automation and AI, really opened up his playbook for me, including why he identified the need to create champions for Bob within the organization from the outset and how he successfully ignited the flywheel effect.
This story may be about engineering organizations and coding technologies, but I think it applies to AI transformation and what motivates people more broadly. We all like to discover, and nobody really likes to be told what to do.
Testing AI
Should you walk or drive to the car wash? (Asking for the chatbots)
There’s a funny little test for LLMs making the rounds, referred to as “the car wash test.” It revolves around a simple prompt:
“The car wash is 100 ft. from my home. I need to wash my car. Should I walk or drive there?”
You may or may not be surprised to learn that many models will confidently tell you to walk. Claude, for example, not only told me so with total certainty but also gave me an in-depth explanation of why this is the obvious answer (it’s only the length of a basketball court; you’ll get fresh air) and why driving would be silly (driving would mean starting the car, backing out, driving for literally 5-10 seconds, parking, and then you still have to walk back home anyway after dropping off the car). ChatGPT gave an almost identical response.
I have to admit this gave me a good laugh. Some will argue you can improve the prompt to get the correct answer, but c’mon. Practically every human would intuitively understand this — and that’s exactly what’s so telling about this test. LLMs feel smart until something this small makes them look very dumb.
The outlier of those I tested was Gemini, which not only said I should drive but correctly identified this as a “logic puzzle.” However, it also: created a spreadsheet breaking down the effort required for each option and what the outcome would be; made a list of “pro tips” about how this could damage my engine’s health if I did it often; suggested I walk back home for a snack while the car is being vacuumed; and asked if I’d like it to look up the forecast to determine if it’s a good day to get the car washed. Total overkill.
Humans: 1. Chatbots: ?
WIP
Is AI burning you out?
Managing multiple agents across various workloads simultaneously may yield more productive results, but it’s an intense way to work. I’m especially interested to hear from engineers and technical folks about the impact embracing AI is having on their cognitive load. The creators of AI tools preach that AI will reduce mental burden and free us from the tediousness of workflows past — is it true? Or is AI just creating a new type of burden?
Email me at [email protected] or reach me securely on Signal at 973-298-0875. Happy to chat anonymously.