BookBytes #1: Books Worth Your Time

I’ve been meaning to do this for a while [having had the idea back during Covid times, when we were all trying to find ways to stay healthy and sane]. Life, however [as has been famously quoted], moves pretty fast…

Anyway, I read quite a lot and listen to a lot of audiobooks (which helps pass the time when working on the house renovations you promised your better half would be finished in the first two years… *cough*). Every month or so, I’m going to pick three books that I think are worth your time and tell you why. Some of them will be relevant to whatever you do professionally; some of them will just be genuinely excellent and I want more people to have read them, so I have someone to talk to about them!

Let’s give it a go…


The Coming Wave by Mustafa Suleyman


I’ve read a lot of books about technology and AI over the past few years and the vast majority of them follow a fairly predictable structure. Exciting technology, amazing potential, a few paragraphs acknowledging that yes there are some concerns, and then more amazing and this time transformative potential. The Coming Wave doesn’t do that, and I think it’s because Suleyman isn’t a technology commentator who got excited about AI from the outside. He co-founded DeepMind, ran the AI products team at Google, and founded Inflection AI. He isn’t observing this stuff from a distance, he actually helped to invent it and build it.

[In fact last week I also finished “Supremacy” by Parmy Olson, which goes into great detail about his background, and is also a very insightful mini-history of the formation of modern AI. Anyway, back to Waving…]

What makes the book worth reading is that he’s genuinely conflicted about what he helped create. Not in the “we should have conversations about ethics [so I can sell more books]” way you see in a lot of tech writing, but in a much more specific and uncomfortable way. His view is essentially that the containment problem (working out how you prevent these technologies from being used in catastrophic ways, aka the Skynet problem) may not be solvable, and that we’re running out of time to find out. He’s not telling us to panic, but he is telling us that the people who should be solving this aren’t moving fast enough, and he ought to know.

My Goodreads note when I finished it was “slightly disturbing even for someone who works in this space”. For my non-UK friends and colleagues, “slightly” is the British version of “extremely”.

If you’re in any kind of technology leadership role, or perhaps just someone with a pulse, you should read it… tomorrow.


Smart Brevity by Jim VandeHei, Mike Allen and Roy Schwartz


As someone who has been known to occasionally use slightly too many syllables (see definition of “slightly” above), I’ll admit I was a bit sceptical when this was recommended to me. A book about writing shorter things, from the founders of Axios, a publication which is essentially famous for being short. It seemed a bit self-referential, and I was half-expecting 200 pages of fairly obvious advice dressed up in a lot of white space.

I was wrong, which I know because I went back into Goodreads and changed my rating from three stars to five after I’d been using the techniques for a couple of weeks. That doesn’t happen often!

The core idea is straightforward enough: most professional writing is structured backwards, assumes too much patience from the reader, and buries the thing that actually matters somewhere in the middle. The fix isn’t writing better sentences; it’s changing the structure and style entirely. Lead with what matters, make the “why should I care” explicit, and then let people choose how much further they want to go.

Once you start doing this with emails and presentations, you also start noticing how much of what other people send you is padding. Adopting this method saves time for everyone! I’m not saying we should use this for everything (indeed I’m not), partly because it can be a touch dry at times, but it’s certainly worth a go.

If you want easy mode, you can even take a draft email you wrote and ask your favourite LLM to “rewrite this mail using the Smart Brevity framework from Axios (and the book Smart Brevity by Jim VandeHei et al)”. You’ll be amazed at the difference; indeed, you might even want to turn that into a Copilot agent… #justsayin
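
If you want to go one step further and script it, here’s a minimal sketch, assuming the OpenAI Python SDK with an API key in your environment. The model name and the exact prompt wording are just my placeholders, not anything from the book:

```python
# Hedged sketch: rewrite a draft email in the Smart Brevity style via an LLM API.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY are available; swap in
# whichever client and model you actually use.
from openai import OpenAI

client = OpenAI()

def smart_brevity_rewrite(draft: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": ("Rewrite the user's email using the Smart Brevity framework from Axios "
                         "(per the book by Jim VandeHei et al): a punchy one-line headline, "
                         "a one-sentence 'why it matters', then short scannable bullets.")},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

print(smart_brevity_rewrite("Hi team, just a quick note about the thing we discussed last week..."))
```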


The Lies of Locke Lamora by Scott Lynch


… and now for something completely different.

My #1 favourite author of all time is the amazing, inimitable and [embarrass yourself by laughing out loud on the bus in front of strangers]ly hilarious Sir Terry Pratchett. Sadly, Sir Terry is no longer with us, and if you haven’t read any of his books, just start with “Guards! Guards!” and thank me later! Since his passing I’ve been sampling various authors to try to capture a touch of that genius. The Lies of Locke Lamora is probably the closest thing without actually being remotely copy-cat. It’s simply brilliant and I want more people to have read it!

Locke Lamora is a con artist living in a city that feels like Renaissance Venice if Venice had slightly more casual poisoning, and a thieves’ guild with genuinely baffling rules about which parts of the nobility you’re allowed to actually steal from. He and his crew run increasingly elaborate and inadvisable schemes against the city’s upper class, which works out reasonably well, until it very much doesn’t.

I don’t want to say too much about the plot because there’s a moment around page 126 that I didn’t see coming at all, and being told about it in advance would genuinely ruin it! 😉 What I will say is that the dialogue is sharp, Father Chains is now one of my all-time favourite characters from any book, and the whole thing has an energy that keeps you reading when you should probably be sleeping.

I’ve read all of the (so far brief) series to date and am now [im]patiently waiting for the next one, which has already been delayed by several years! If you have Scott Lynch’s contact details, please send them over.


BookBytes goes out [if the moon phases are fully aligned] on the first or second Tuesday of each month, space-time continuum permitting. Recommendations, abuse, and “you missed an obvious one” notes are all welcome.

AI, BookBytes, Books

Half a Second Saved the Internet

One of my favourite activities on a relaxed weekend morning is watching a couple of Veritasium and B1M videos. Last week Veritasium published a superb video on a [not so] simple Linux exploit that could have had HUGE ramifications. If you haven’t seen it, go watch it, I’ll wait. It’s actually one of the most fascinating yet little-known stories in recent tech history, and it sits right at the intersection of many of the things that interest me: open source, trust, the humans behind the software, and just how fragile much of it really is.

A Lone Maintainer

If you CBA watching the video (seriously though, you really should, just put it on 1.5x!), here’s the short version. A lone developer called Lasse Collin maintained a compression library called “XZ Utils”, quietly and unpaid, for roughly twenty years. You’ve almost certainly never heard of it, which is basically the point. It sits underneath an enormous amount of critical infrastructure, including (most importantly) SSH, that millions of servers rely on every single day. Nobody thinks about it, nobody talks about it, and for most of its life exactly one person was keeping the lights on…

Lasse, who was already burned out and struggling with his mental health, was being hounded by “accounts” to make more progress on the project, with some messages encouraging him to accept help and hand over responsibility to other devs. Then someone calling themselves “Jia Tan” showed up, which we now know was almost certainly a nation state operation, and spent two and a half years patiently social engineering their way into becoming a trusted maintainer of the project. They were helpful, responsive, wrote good code, etc. All seemed peachy! Enter Rich Jones, who works at Red Hat, packaging Fedora. He began to trust Jia because… well, because Jia behaved exactly like the kind of person open source desperately needs. That’s what makes social engineering so effective: the good behaviour is the attack.

A Backdoor to the Internet

The backdoor they slipped in was technically brilliant and horrifying in equal measure, a hidden compromise buried in the compression library that would have given someone a backdoor to a significant chunk of the world’s servers if it had made it into stable releases across major Linux distros. The whole thing eventually unravelled because a single Microsoft developer called Andres Freund noticed that his SSH logins were taking half a second longer than they should and decided to dig into why. Five hundred milliseconds stood between us and a catastrophic supply chain compromise, and one curious engineer is the reason we caught it.
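
If you fancy channelling your inner Andres Freund, the kind of measurement that catches this sort of thing is not exotic. Here’s a rough sketch (the host name and run count are placeholders) that times repeated non-interactive SSH logins so a sudden half-second jump stands out:

```python
# Rough sketch: time repeated non-interactive SSH logins and report the average.
# "some-host" is a placeholder; BatchMode avoids hanging on a password prompt.
import subprocess
import time

HOST = "some-host"
RUNS = 5

timings = []
for _ in range(RUNS):
    start = time.perf_counter()
    subprocess.run(["ssh", "-o", "BatchMode=yes", HOST, "true"], capture_output=True)
    timings.append(time.perf_counter() - start)

avg_ms = 1000 * sum(timings) / len(timings)
print(f"Average login time over {RUNS} runs: {avg_ms:.0f} ms")
# A baseline that suddenly grows by ~500 ms is exactly the anomaly Freund noticed.
```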

Open Source Matters

I’m pro open source, always have been. I’ve been using Linux since the 90s, which probably gives you an idea of what colour my beard is. The concept of “Software should be free and we’ll prove it works” is unarguably one of the greatest ever human endeavours. When geopolitical tensions seem to ratchet up weekly, and we’re supposedly retreating into blocs and borders, the fact that open source still works is genuinely remarkable and something to make you proud of being an ape descendant! It’s proof that like-minded humans can continue to collaborate on a global scale.

But here’s where we have a challenge… Linus’s Law says that given enough eyeballs, all bugs are shallow. That’s true for projects that actually have enough eyeballs, but popularity does not equal scrutiny. XZ Utils had millions (perhaps billions) of installs, but for most of its life basically one person was reading the code, and then there were two, and the second one was the attacker. (Yes, I know, hyperbole, but you get the point!). Downloads are not eyeballs, installs are not audits, and I think we’ve been confusing usage with oversight for a long time. The XZ story is a great example of where it went both brilliantly right and very wrong.

The real vulnerability here wasn’t technical; it was a system that let a single unpaid maintainer carry critical infrastructure on his back for two decades without meaningful support. We collectively built our digital world on top of someone’s volunteer labour and then acted surprised when that became an attack surface!

Will AI Make This Better or Worse?

Which brings me to the point of this article.

Can AI help with this?

[… and yes, it’s AI again! You now have permission to roll your eyes about having read YAAIA, aka yet another AI article]

In theory, AI-powered code review could potentially spot the kind of obfuscated changes that slipped through here. Automated analysis tooling that never gets tired, never gets burned out, never feels social pressure to approve a commit because the submitter has been so helpful lately. That sounds very promising, but there’s another side to it. As more code gets written by AI and reviewed by AI, do we end up with even fewer human eyeballs on critical paths? Do we create a new kind of “enough eyeballs” fallacy where the eyeballs are all artificial and share the same blind spots?

I genuinely don’t know the answer, and I suspect the truth is that AI will simultaneously make some attacks harder and others easier [really helpful insight, I know!].

Interesting Times in Security


What I do know is that the XZ story deserves to be told widely, especially within our industry. Absolutely not as a cautionary tale about open source, because closed source has its own risks and horror stories that just happen behind closed doors.

To me, it’s a reminder that the humans behind the code matter as much as the code itself. Fund the hard-working maintainers, buy them a Ko-fi, support the people doing unglamorous work, and maybe, occasionally, investigate when something takes half a second longer than it should.

As my favourite author of all time, Sir Terry Pratchett, reminded us, “may you live in interesting times”. As the world tries to keep up with supply chain security, I can confirm interesting times are in the current sprint…

PS – Amusingly as I was writing this, another article on a similar topic hit the headlines, about FOSS repos. Worth a few minutes of your time too I reckon.

Linux, Security

We Need an AI Strategy

I work with a lot of organisations on cloud and technology strategy, and there’s a phrase I’ve been hearing with increasing regularity over the last three years. It comes from the boardroom, from the CEO, occasionally from a CFO who’s just come back from a conference looking slightly panicked…

“We need an AI strategy.”

It sounds urgent, and of course it is urgent in a way, but I’ve come to realise it almost always means two completely different things depending on who’s saying it and why. Getting your head around that gap is probably the most useful thing you can do when it lands on your desk.

When a board or CEO says “we need an AI strategy,” they’re almost never asking for a technical roadmap. What they’re really after is reassurance; that the organisation isn’t falling behind, that investors won’t ask an awkward question at the next AGM and get a blank stare in return, that somebody somewhere in the building has thought about this. They want something credible and coherent, ideally fitting on a slide that nobody has to squint at.

When an IT leader hears it, though, they hear something quite different. They hear “investment in data infrastructure”, and they’re absolutely right to, because without a solid data foundation, you’re building your AI solution on quicksand.

I’ve made a silly analogy of this a few times over the years, something along the lines of how you can’t build anything massive on sand, apart from maybe the pyramids. Then I was in Egypt with my family last year and discovered something I never knew before… even the pyramid builders knew you can’t just stack stones on sand! The pyramids actually have foundations of bedrock and mahoosive limestone blocks buried up to 6-8 metres deep! I’d been proving my own point for ages and I hadn’t even realised…

The Step Pyramid at Saqqara. Even 4,500 years ago, they knew you needed proper foundations.

Two documents are better than one…

If you’re the IT leader with this request, my suggestion is that you probably need two documents rather than one, because the audience for each is completely different and trying to serve both at once almost never works.

The first is an exec board narrative. Short, strategic, and confident, focused on [most importantly] business outcomes, then perhaps competitive positioning, and risk. This is the information the CEO walks into an investor meeting with, and it needs to tell a story about where the organisation is heading without getting into how the plumbing works. Nobody on the board wants to hear about the optimised data lake architecture, and frankly they shouldn’t have to.

The second is the working strategy. This is the real one, with the investments, the data programme, the build-vs-buy decisions, the governance model, and probably a spreadsheet or twelve attached. It’s the document your teams will actually deliver against, and it has to be brutally honest about what state your data is in right now and what needs to happen before any of the exciting AI stuff becomes real.

Mashing these two together won’t work. Either it ends up so vague it can’t actually drive decisions, or the working strategy gets presented to the board and you watch the room quietly die somewhere around slide seventy-eight [we’ve all been there!]. The CEO needs to calm investors, the IT team needs to know what they’re actually building, and one strategy document genuinely can’t do both.

Start With the Bedrock

Before writing any document, the most valuable thing you can do is take an honest look at your data. Not a new procurement exercise, not an AI pilot, just a straightforward answer to whether you could actually build something real on top of what you’ve got today.

In most organisations, the answer is “it’s complicated,” and that’s absolutely fine as a starting point; nobody expects perfection [or indeed the Spanish Inquisition!]. The outcome you really want to avoid is cracking on with the strategy without asking the question, then discovering six months later that the foundations aren’t there and having to retrofit them while everything else is already going up around you.

The ancient Egyptians figured that bit out about four and a half thousand years ago. There’s probably a lesson in there somewhere… 🔺

AI, Cloud

5 Things Running an OpenClaw Personal AI Agent Taught Me (The Hard Way)

The tool I’ve been tinkering with just made headlines. Peter Steinberger, creator of OpenClaw, is joining OpenAI to “drive the next generation of personal agents.” Sam Altman called him “a genius.” Not bad for an open source project only weeks old…

I’ve been running OpenClaw as a personal AI agent for several weeks now (in very strict isolation). It handles a standalone calendar, sends me reminders, processes emails I forward to it, manages my task lists, writes code and makes commits to specific projects I’ve shared from my GitHub account to its own. Think Jarvis, but with more cron jobs and less Robert Downey Jr. Along the way I’ve learned a few things the hard way. I already posted about some of those last week, and here are five more.


1. Your AI Will Forget Unless You Make It Remember

This one caught me off guard. Every time your agent starts a new session, it wakes up with absolutely no memory of what you did yesterday. None. It’s like your intelligent, funny, witty teenage child, who wakes up every morning with no memory of you reminding them to clean their room tomorrow…

The fix is decidedly low tech, namely markdown files! The agent reads a MEMORY.md file at the start of every session for long term context, plus daily “summarising” note files for recent history. Without these, every conversation starts from zero. You find yourself re-explaining the same decisions, the same preferences, the same project context. It can be quite frustrating to say the least!

In short, if you want your AI to know something tomorrow (or even immediately following a /reset), write it down today. In a file. On disk. Like it’s 1995 (only not floppy)…
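
To make that concrete, here’s a minimal sketch of the pattern. MEMORY.md comes from my setup above; the daily-notes layout and the idea of prepending it all to the system prompt are my own illustration, not OpenClaw’s actual internals:

```python
# Minimal sketch: rebuild the agent's context from files on disk at session start.
# MEMORY.md is the long-term file mentioned above; the notes/ layout is illustrative.
from datetime import date, timedelta
from pathlib import Path

MEMORY_FILE = Path("MEMORY.md")
NOTES_DIR = Path("notes")  # e.g. notes/2026-02-03.md, one summary file per day

def build_context(days_back: int = 3) -> str:
    parts = []
    if MEMORY_FILE.exists():
        parts.append("# Long-term memory\n" + MEMORY_FILE.read_text())
    for offset in range(days_back, 0, -1):
        note = NOTES_DIR / f"{date.today() - timedelta(days=offset)}.md"
        if note.exists():
            parts.append(f"# Notes for {note.stem}\n" + note.read_text())
    return "\n\n".join(parts)

# Prepend this to the system prompt so "yesterday" actually exists for the agent.
system_prompt = build_context() + "\n\nYou are my personal assistant..."
```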

2. Silent Fallbacks Will Eat Your Budget

Here’s a fun one. I set up a simple reminder cron job. Simple task, should cost fractions of a penny. I configured it to use Google Gemini 2.5 Flash Lite, a super-fast, super cheap model. Perfectly adequate for “tell Alex to [insert reminder here].” (side note: mine has permission to do so “with attitude” if need be!).

What I didn’t clock was that, by default, when Google rate-limited Gemini (I was using the free version to start), the system silently fell back to Opus, Anthropic’s most expensive model. My bedtime reminder, a task that could run on a calculator, was burning through premium AI tokens! I only found out when I was looking at some failed, rate-limited tasks; the bot didn’t think this was worth proactively letting me know about. No warning, no alert. Just a quiet, expensive upgrade.

Check your fallback chains, then check them again. Then check them after each upgrade (one upgrade once broke the contexts and channels for the gateway so badly that it forgot who it was and I had to restore from a backup!).
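
OpenClaw’s real configuration will look different, but the principle is easy to sketch: make the fallback chain explicit, and make it shout when it fires, rather than silently upgrading you to something ten times the price. The model names and prices below are placeholders:

```python
# Illustrative only – not OpenClaw's actual fallback mechanism.
# The idea: an ordered chain of (model, cost) pairs, with a loud log line
# whenever a call falls back to something more expensive than intended.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("fallbacks")

# Cheapest first; prices are placeholders per 1M tokens.
FALLBACK_CHAIN = [
    ("gemini-2.5-flash-lite", 0.10),
    ("claude-sonnet", 3.00),
    ("claude-opus", 15.00),
]

def call_with_fallback(prompt: str, call_model) -> str:
    """call_model(model_name, prompt) should raise on rate limits or errors."""
    for i, (model, cost) in enumerate(FALLBACK_CHAIN):
        try:
            return call_model(model, prompt)
        except Exception as err:  # rate limit, outage, etc.
            if i + 1 < len(FALLBACK_CHAIN):
                next_model, next_cost = FALLBACK_CHAIN[i + 1]
                log.warning(
                    "%s failed (%s); falling back to %s (~%.0fx the cost)",
                    model, err, next_model, next_cost / cost,
                )
    raise RuntimeError("All models in the fallback chain failed")
```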

Finally, set up a monitoring page on your “Mission Control”. You should definitely build one of these – a small, bot-built webapp for managing and monitoring your bot, e.g. here’s mine at the moment:

3. The Hidden Cost of “Good Enough” Model Defaults

Related to the above, but subtler. Not every task your agent performs needs the flagship model. Heartbeat checks, health pings, simple notifications: these can run on the cheapest model available. I’ve got simple jobs running on Gemini Flash Lite, which costs almost nothing. Meanwhile, many of my cron jobs were defaulting to models ten times the price for work that was just as simple.

Match the model to the task. Your “send me the weather” job doesn’t need the same brain as your “analyse this quarterly report” job. It sounds obvious, but so do many things!
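
In practice this can be as unsophisticated as a small routing table; the task names and model names below are purely illustrative:

```python
# Illustrative routing table: match the model to the weight of the task.
MODEL_FOR_TASK = {
    "heartbeat": "gemini-2.5-flash-lite",        # near-free, perfectly adequate
    "bedtime_reminder": "gemini-2.5-flash-lite",
    "weather_brief": "gemini-2.5-flash-lite",
    "email_triage": "claude-sonnet",              # mid-tier for light reasoning
    "quarterly_report_analysis": "claude-opus",   # flagship only where it earns its keep
}

def model_for(task_type: str) -> str:
    # Default to the cheapest model, not the most expensive one.
    return MODEL_FOR_TASK.get(task_type, "gemini-2.5-flash-lite")
```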

4. Your AI Anchors on Context, Not Facts

This is the one that properly messed with my head. I spent a long evening debugging cron job issues. Hours of back and forth, pasting logs, tweaking configs. All the conversation context was about problems from Tuesday. By the time we finished, my agent was convinced it was still Tuesday.

It was Wednesday.🤦

The model doesn’t always know what day it is from an internal clock (you know – those things that have been in computers for decades…). It appears to infer “reality” from the conversation window. If your context is full of Tuesday’s problems, Tuesday is reality. This has real consequences when you’re scheduling things, setting reminders, or asking “what’s happening tomorrow”. I’ve seen this happen many times in different scenarios, including pre-scheduled morning briefs based on the wrong day and scheduling cron jobs for the wrong day and time.

Your AI’s sense of the world is only as good as the context you’ve given it, and context can lie. Once again I recommend a Mission Control to easily eyeball things occasionally.
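
The cheap fix is to inject ground truth at the top of every session rather than letting the model infer it from stale context. A minimal sketch (the wording of the preamble and the timezone are just mine):

```python
# Minimal sketch: stamp every session with the real date and time so the agent
# doesn't have to infer "today" from whatever is lingering in the context window.
from datetime import datetime
from zoneinfo import ZoneInfo

def session_preamble(tz: str = "Europe/London") -> str:
    now = datetime.now(ZoneInfo(tz))
    return (
        f"Current date and time: {now:%A %d %B %Y, %H:%M %Z}. "
        "Treat this as ground truth, even if the conversation history refers to other days."
    )

system_prompt = session_preamble() + "\n\nYou are my personal assistant..."
```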

5. Trust Logs Over Vibes

At one point I asked my agent which model it had used for a particular task. It confidently told me Sonnet, but the logs showed Opus (via fallback). The model wasn’t lying exactly… it just didn’t know. It reported what it thought was true based on its configuration, not what actually happened at the infrastructure level.

This applies broadly. Just like any chatbot or web-based bot you’ve been talking to for the last 3 years, your AI will sound confident about things it cannot possibly verify, or can’t be bothered to check because it would be a wasteful activity, so it just goes with what it has in context. System-level behaviour, actual API calls, the real things: these live in logs, not in chat responses, so when it matters, always go to the source.

(I think of this as the “Did you really brush your teeth? Shall we go check if the brush is wet?” scenario).
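
If your gateway writes its API calls out as JSON lines (the log format below is just what I mean by “go to the source”, not anything OpenClaw mandates), answering “which model actually ran this?” becomes a few lines of Python rather than a question the model will cheerfully guess at:

```python
# Illustration with a made-up JSONL log: one {"task", "model", "ts"} object per line.
# The point: answer "which model ran this?" from the logs, not from the chat window.
import json
from collections import Counter
from pathlib import Path

def models_used(log_path: str, task: str) -> Counter:
    counts = Counter()
    for line in Path(log_path).read_text().splitlines():
        entry = json.loads(line)
        if entry.get("task") == task:
            counts[entry.get("model", "unknown")] += 1
    return counts

print(models_used("usage.jsonl", "bedtime-reminder"))
# e.g. Counter({'claude-opus': 9, 'gemini-2.5-flash-lite': 1}) – the wet toothbrush test.
```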

What Next

Peter Steinberger’s very high profile move to OpenAI tells you where this is heading. Personal agents aren’t a nerdy hobbyist curiosity anymore; they’re absolutely going mainstream. OpenClaw will continue as open source via a foundation, which is great, but the bigger signal is that OpenAI wants this expertise in house. They’re betting that millions of people will be running agents like this, and you would imagine that, before long, they’ll run on Codex by default.

When that happens, every lesson I’ve learned will be demonstrated at global scale. People will inevitably burn money on cheap tasks, wonder why their agent forgot last week’s conversation and trust a confident response over a log file.

If you’re thinking about running your own agent, start now while it’s still a bit rough around the edges. The lessons are cheaper to learn on an old laptop in your cupboard (on an isolated network, with isolated accounts!), than in production and connected to all your company’s systems!

Now I’m off to go and find some 10 year old memory DIMMs to sell for a 400% markup.

💻💰🎉

PS – If you got this far, thanks for reading, and I apologise for the rather click-baity title! Don’t hate the player, etc…

AI, Web