Jason made me do it (meet the Optimise to Innovate podcast)

Three and a half years is a long time to not do something you enjoy. In my defence, I had good intentions every single one of those years! [In hindsight, I may have been operating on an intentions-heavy, outcomes-light model.]

Anyway, Jason Gray and I have started a podcast! It’s called Optimise to Innovate and it’s out now…

What’s it about?

I often have conversations with potential customers who tell me they’ve invested internally in the cloud, or an AI platform, or the shiny new thing, and six months later it’s costing more than expected, people haven’t adopted it properly, and the team that was supposed to be transformed is still clinging to their legacy solutions by their fingernails. The technology didn’t fail exactly, it just didn’t meet expectations, or more often, there were no accurate expectations set up front!

That gap between “we invested in this” and “this is working and adopted widely” is what Jason and I wanted to dig into with the podcast. Together with guests who’ve been in the weeds on solving these problems, we’re covering cloud investment, AI adoption, FinOps, software asset management, and all the awkward middle bits that nobody puts in a press release.

What’s out so far?

We’ve published three episodes:

Episode 1: Making Workplace AI Work for You
Our first proper episode, with Tomi Karafilov joining us to talk about workplace AI. Not the breathless “AI will change everything” conversation, but the practical reality of implementing it and getting people to actually use it. My favourite quote from Tomi was “the brain in front of the computer is you”, which is something we often forget when we expect a new toy to do everything.

Episode 2: FinOps – You Can’t Tool Your Way Out of Bad Cloud Habits
We brought in FinOps experts Parker Nancollas and Anthony Thurston to talk about the people side of cloud cost control. They really drove home the message that FinOps has little to nothing to do with tooling, and everything to do with culture and habits. If your cloud bill has ever made you wince, this one might give you a few handy tips!

Episode 3: Agentic AI Is Not Really About Agents
Out yesterday! Alex Waldhaus and Seva Shchepanskyi joined us to cut through the noise around agentic AI. Their central argument was on point – all the talk and hype about agents is a bit of a distraction. What you’re actually doing is redesigning business processes, and all the usual rules still apply. Get your data in order first, layer in automation gradually, keep humans in the loop, design for when things go wrong, not just for when they go right, etc. [Radical concepts, I know!]

My favourite line of the episode came from Alex Waldhaus: “The real risk is not that agentic AI fails. The real risk is that companies deploy it like magic instead of engineering it like a system.” I’m totally stealing that one… [Go listen.]

Optimise to Innovate Podcast Logo

Where to find us

The podcast is available on all the usual platforms. Search for “Optimise to Innovate” wherever you get your podcasts, or head straight to optimisetoinnovate.buzzsprout.com for the links to your favourite platforms.

If you’ve got a topic you’d like us to cover, or you want to come on as a guest, drop us an email at [email protected]. We’re always looking for interesting people with strong opinions and real experience.

And if you enjoy it, hit subscribe. It legitimately helps more than you’d think! 🎙️

Podcasting

Should I Recommend My Daughter Becomes a Developer?

My eldest daughter is in her mid-teens, and I honestly don’t envy her. She’s trying to tackle the big question of what to do for the rest of her life, which is a question most adults still wrestle with! She is very strong in STEM but also super creative. She’s bouncing back and forth between multiple ideas including biology and music, but she says her ideal role would probably be a developer. If this were 20 years ago, I would have said AWESOME [#prouddad]! Now, I’m sadly not so sure…

In my 20+ years of being an IT greybeard, I’ve watched whole job categories appear, mature, and then blur into something else (albeit often just a rebrand!). So when my daughter says developer, I don’t hear a stable destination, I hear a role changing at such high speed it’s making the current generation of devs’ heads spin!

Skills that survive change

The most interesting part of my career hasn’t been any specific technology (despite what some might think!), it’s been architectural thinking. The ability to take something vague and fuzzy, break it apart, work out how the pieces connect, and describe what “done” looks like clearly enough that someone can actually go and build it. I’ve done that for “human” teams for years, but it turns out AI needs exactly the same thing, just faster and with even less patience for ambiguity [which, if you’ve ever worked with developers, is really saying something 😉].

The numbers are genuinely becoming bonkers… Cursor generates around $16 million in revenue per employee, Midjourney hit $200 million with 40 people, and tonnes of organisations now have three engineers shipping what would have taken a team of ten just eighteen months ago. I won’t even guess how much Peter Steinberger got paid by OpenAI!

I think the interesting bit here is that the cost of writing code is collapsing, but the cost of being unclear about what you want built is going up! It’s the age-old saying about kak in, kak out… AI speeds up code generation, even with vague requirements (unless of course, you’re reverse prompting!). As any developer who’s ever sat through an unclear brief will tell you, without that clarity of requirements, we will just build wrong code faster. Indeed, AWS seems to have invented Kiro to solve this exact problem…

Crystal balls…

Nate B Jones says there’s an uncomfortable split emerging. One group is learning to work with AI as a legit force multiplier, holding entire systems in their heads, defining outcomes clearly enough that agents can execute them, and actually checking whether what came back solved the right problem (and didn’t introduce a massive security hole in the process!). The other group is using AI as a faster version of what they were already doing, which he says sounds fine until you realise that’s also the work AI handles first when companies start “optimising” their teams. Entry-level developer postings are down (by as much as two-thirds, depending on who you ask), and the junior pipeline that most people assumed would always exist is visibly narrowing. It’s worth noting that Matt Garman says the opposite!

His video on the topic was the other trigger for this post and is worth taking the time to watch.

So what should we tell someone starting out? I would still encourage young people to go into tech, but think carefully about which skills they’re building. Understanding how systems work, how to define requirements precisely enough to actually be useful, how to translate from IT to human, how to ask the right questions before anyone opens an IDE… you don’t need the architect job title to think like one!

Part of me really wants to tell my daughter to go for it, because she’d be brilliant at it! But I can’t tell her in good conscience that aiming for a “traditional” developer role is a safe long-term bet. I’d rather she trained in the kind of thinking that travels well regardless of where the IT industry ends up (or indeed whichever industry she ends up working in). I know she’ll be fine whatever she picks… I just wish I could give her a better answer than “it depends”.

AI, Architecture, Career

BookBytes #1: Books Worth Your Time

I’ve been meaning to do this for a while [having had the idea back during Covid times, when we were all trying to find ways to stay healthy and sane]. Life however [as has been famously quoted], moves pretty fast…

Anyway, I read quite a lot and listen to a lot of audiobooks (which helps pass the time when working on the house renovations you promised your better half would be finished in the first two years… *cough*). Every month or so, I’m going to pick three books that I think are worth your time and tell you why. Some of them will be relevant to whatever you do professionally, some of them will just be genuinely excellent and I want more people to have read them, so I have someone to talk to about them!

Let’s give it a go…


The Coming Wave by Mustafa Suleyman


I’ve read a lot of books about technology and AI over the past few years and the vast majority of them follow a fairly predictable structure. Exciting technology, amazing potential, a few paragraphs acknowledging that yes there are some concerns, and then more amazing and this time transformative potential. The Coming Wave doesn’t do that, and I think it’s because Suleyman isn’t a technology commentator who got excited about AI from the outside. He co-founded DeepMind, ran the AI products team at Google, and founded Inflection AI. He isn’t observing this stuff from a distance, he actually helped to invent it and build it.

[In fact last week I also finished “Supremacy” by Parmy Olson, which goes into great detail about his background, and is also a very insightful mini-history of the formation of modern AI. Anyway, back to Waving…]

What makes the book worth reading is that he’s genuinely conflicted about what he helped create. Not in the “we should have conversations about ethics [so I can sell more books]” way you see in a lot of tech writing, but in a much more specific and uncomfortable way. His view is essentially that the containment problem (working out how you prevent these technologies from being used in catastrophic ways, aka the Skynet problem) may not be solvable, and that we’re running out of time to find out. He’s not telling us to panic, but he is telling us that the people who should be solving this aren’t moving fast enough, and he ought to know.

My Goodreads note when I finished it was “slightly disturbing even for someone who works in this space”. For my non-UK friends and colleagues, that’s the British version of the word slightly.

If you’re in any kind of technology leadership role, or perhaps just someone with a pulse, you should read it… tomorrow.


Smart Brevity by Jim VandeHei, Mike Allen and Roy Schwartz


As someone who has been known to occasionally use slightly too many syllables (see definition of “slightly” above), I’ll admit I was a bit sceptical when this was recommended to me. A book about writing shorter things, from the founders of Axios, a publication which is essentially famous for being short. It seemed a bit self-referential, and I was half-expecting 200 pages of fairly obvious advice dressed up in a lot of white space.

I was wrong, which I know because I went back into Goodreads and changed my rating from three stars to five after I’d been using the techniques for a couple of weeks. That doesn’t happen often!

The core idea is straightforward enough; most professional writing is structured backwards, assumes too much patience from the reader, and buries the thing that actually matters somewhere in the middle. The fix isn’t writing better sentences, it’s changing the structure and style entirely. Lead with what matters, make the “why should I care” explicit, and then let people choose how much further they want to go.

Once you start doing this with emails and presentations, you also start noticing how much of what other people send you is padding. Adopting this method saves time for everyone! I’m not saying we should use this for everything (indeed I’m not), in part as it can be a touch dry at times, but it’s certainly worth a go.

If you want easy mode, you can even take a draft email you wrote and ask your favourite LLM to “rewrite this mail using the Smart Brevity framework from Axios (and the book Smart Brevity by Jim VandeHei et al)”. You’ll be amazed at the difference, indeed you might even want to turn that into a Copilot agent… #justsayin
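If you wanted to wire that trick into a script rather than pasting into a chat window, it might look something like this rough Python sketch. The prompt wording, the helper name and the commented-out client call are all my own assumptions rather than anything from the book – swap in whichever LLM client and model you actually use.

```python
# A minimal sketch of automating the "rewrite with Smart Brevity" trick.
# The prompt text and function name below are illustrative assumptions.

def smart_brevity_prompt(draft: str) -> str:
    """Wrap a draft email in a Smart Brevity rewrite instruction."""
    return (
        "Rewrite this mail using the Smart Brevity framework from Axios "
        "(and the book Smart Brevity by Jim VandeHei et al). "
        "Lead with what matters, make the 'why should I care' explicit, "
        "and keep it short.\n\n---\n" + draft
    )

if __name__ == "__main__":
    draft = "Hi team, just circling back on the thing we discussed last week..."
    prompt = smart_brevity_prompt(draft)
    print(prompt)
    # From here, pass `prompt` to your LLM of choice. For example, assuming
    # the official openai package with an OPENAI_API_KEY in your environment:
    #
    #   from openai import OpenAI
    #   client = OpenAI()
    #   resp = client.chat.completions.create(
    #       model="gpt-4o-mini",
    #       messages=[{"role": "user", "content": prompt}],
    #   )
    #   print(resp.choices[0].message.content)
```

The same prompt-builder would drop straight into a Copilot agent or any other wrapper you fancy.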


The Lies of Locke Lamora by Scott Lynch


… and now for something completely different.

My #1 favourite author of all time is the amazing, inimitable and [embarrass yourself by laughing out loud on the bus in front of strangers]ly hilarious Sir Terry Pratchett. Sadly, Sir Terry is no longer with us, and if you haven’t read any of his books, just start with “Guards! Guards!” and thank me later! Since his passing I’ve been sampling various authors to try to capture a touch of that genius. The Lies of Locke Lamora is probably the closest thing without actually being remotely copy-cat. It’s simply brilliant and I want more people to have read it!

Locke Lamora is a con artist living in a city that feels like Renaissance Venice if Venice had slightly more casual poisoning, and a thieves’ guild with genuinely baffling rules about which parts of the nobility you’re allowed to actually steal from. He and his crew run increasingly elaborate and inadvisable schemes against the city’s upper class, which works out reasonably well, until it very much doesn’t.

I don’t want to say too much about the plot because there’s a moment roughly around page 126 that I didn’t see coming at all, and being told about it in advance would genuinely ruin it! 😉 What I will say is that the dialogue is sharp, Father Chains is now one of my all time favourite characters from any book, and the whole thing has an energy that keeps you reading when you should probably be sleeping.

I’ve read all of the brief series to date and am now [im]patiently waiting for the next one, which has been delayed by several years already! If you have Scott Lynch’s contact details, please send them over.


BookBytes goes out [if the moon phases are fully aligned] on the first or second Tuesday of each month, space-time continuum permitting. Recommendations, abuse, and “you missed an obvious one” notes are all welcome.

AI, BookBytes, Books

Half a Second Saved the Internet

One of my favourite activities on a relaxed weekend morning is watching a couple of Veritasium and B1M videos. Last week Veritasium published a superb video on a [not so] simple Linux exploit that could have had HUGE ramifications. If you haven’t seen it, go watch it, I’ll wait. It’s actually one of the most fascinating, yet little known stories in recent tech history, and it sits right at the intersection of many of the things that interest me: open source, trust, the humans behind the software, and just how fragile much of it really is.

A Lone Maintainer

If you CBA watching the video (seriously though, you really should, just put it on 1.5x!), here’s the short version. A lone developer called Lasse Collin maintained a compression library called “XZ Utils”, quietly and unpaid, for roughly twenty years. You’ve almost certainly never heard of it, which is basically the point. It sits underneath an enormous amount of critical infrastructure, including (most importantly) SSH, that millions of servers rely on every single day. Nobody thinks about it, nobody talks about it, and for most of its life exactly one person was keeping the lights on…

Lasse, who was already burned out and struggling with his mental health, was being hounded by “accounts” to make more progress on the project, with some messages encouraging him to accept help and hand over responsibility to other devs. Then someone calling themselves “Jia Tan” showed up (which we now know was almost certainly a nation-state operation) and spent two and a half years patiently social engineering their way into becoming a trusted maintainer of the project. They were helpful, responsive, wrote good code, etc. All seemed peachy! Enter Rich Jones, who works at Red Hat packaging Fedora. He began to trust Jia because… well, because Jia behaved exactly like the kind of person open source desperately needs. That’s what makes social engineering so effective; the good behaviour is the attack.

A Backdoor to the Internet

The backdoor they slipped in was technically brilliant and horrifying in equal measure, a hidden compromise buried in the compression library that would have given someone a backdoor to a significant chunk of the world’s servers if it had made it into stable releases across major Linux distros. The whole thing eventually unravelled because a single Microsoft developer called Andres Freund noticed that his SSH logins were taking half a second longer than they should and decided to dig into why. Five hundred milliseconds stood between us and a catastrophic supply chain compromise, and one curious engineer is the reason we caught it.

Open Source Matters

I’m pro open source, always have been. I’ve been using Linux since the 90s, which probably gives you an idea of what colour my beard is. The concept of “Software should be free and we’ll prove it works” is unarguably one of the greatest ever human endeavours. When geopolitical tensions seem to ratchet up weekly, where we’re supposedly retreating into blocs and borders, the fact that open source still works is genuinely remarkable and something to make you proud of being an ape descendant! It’s proof that like-minded humans can continue to collaborate on a global scale.

But here’s where we have a challenge… Linus’s Law says that given enough eyeballs, all bugs are shallow. That’s true for projects that actually have enough eyeballs, but popularity does not equal scrutiny. XZ Utils had millions (perhaps billions) of installs, but for most of its life basically one person was reading the code, and then there were two, and the second one was the attacker. (Yes, I know, hyperbole, but you get the point!). Downloads are not eyeballs, installs are not audits, and I think we’ve been confusing usage with oversight for a long time. The XZ story is a great example of where it went both brilliantly right and very wrong.

The real vulnerability here wasn’t technical; it was a system that let a single unpaid maintainer carry critical infrastructure on his back for two decades without meaningful support. We collectively built our digital world on top of someone’s volunteer labour and then acted surprised when that became an attack surface!

Will AI Make This Better or Worse?

Which brings me to the point of this article.

Can AI help with this?

[… and yes, it’s AI again! You now have permission to roll your eyes about having read YAAIA, aka yet another AI article]

In theory, AI-powered code review could potentially spot the kind of obfuscated changes that slipped through here. Automated analysis tooling that never gets tired, never gets burned out, never feels social pressure to approve a commit because the submitter has been so helpful lately. That sounds very promising, but there’s another side to it. As more code gets written by AI and reviewed by AI, do we end up with even fewer human eyeballs on critical paths? Do we create a new kind of “enough eyeballs” fallacy where the eyeballs are all artificial and share the same blind spots?

I genuinely don’t know the answer, and I suspect the truth is that AI will simultaneously make some attacks harder and others easier [really helpful insight, I know!].


Interesting Times

What I do know is that the XZ story deserves to be told widely, especially within our industry. Absolutely not as a cautionary tale about open source, because closed source has its own risks and horror stories that just happen behind closed doors.

To me, it’s a reminder that the humans behind the code matter as much as the code itself. Fund the hard working maintainers, buy them a ko-fi, support the people doing unglamorous work, and maybe, occasionally, investigate when something takes half a second longer than it should.

As my favourite author of all time, Sir Terry Pratchett reminded us, “may you live in interesting times”. As the world tries to keep up with supply chain security, I can confirm interesting times are in the current sprint…

PS – Amusingly as I was writing this, another article on a similar topic hit the headlines, about FOSS repos. Worth a few minutes of your time too I reckon.

Linux, Security