Testing OpenClaw Without Losing Your Mind, Money, or Data

This weekend I decided to test OpenClaw, formerly Clawdbot, and formerly MoltBot for about five minutes (which should already tell you something about the maturity curve)! Not a quick poke around, but leaving it running as an always-on AI assistant and seeing what broke first. After consuming a frankly unhealthy amount of opinions and hype about it, my aim was simple enough: to explore what it can really do without losing my mind, my money, or my data in the process…

Of course I had to include a Red Dwarf meme!

If you somehow missed the hype train, OpenClaw positions itself as an AI assistant that behaves more like a human. It can act proactively, use multiple LLMs as sub-agents, orchestrate tasks, and interact with almost anything on your computer on your behalf. At least in theory. Cue the “beam me up” moment and a strong temptation to believe this might finally be the thing that liquefies your brain in the good way.

That said, there are some fairly substantial security questions still outstanding, particularly around prompt injection and unintended actions, so I approached this very deliberately (unlike some of the horror stories I’ve seen already!). Everything ran in a sandbox, with a carefully limited blast radius, no access to my personal files or accounts, and its own email and calendar. For now, I treated it like a teenager on work experience (keen, capable, occasionally overconfident, and absolutely not to be left unsupervised with anything sharp).

What surprised me most over the weekend wasn’t what it could do, but what it demanded from me in return…

What Did I Learn?

The first big lesson is that the core capabilities are already solid. File operations, shell commands, and basic system interactions work reliably as long as you give them the permissions they need (and actually understand what you’ve granted). When something went wrong, it was rarely because the system couldn’t do the thing, and far more often because the constraints, context, or guardrails weren’t clear enough.

That quickly leads to the real work, which is teaching. This is not a fire-and-forget setup. You have to approach it like a teacher, because the assistant doesn’t magically infer how you want to work. The upside is that once you explain your processes clearly and correct it when it gets things wrong, it tends to internalise those patterns very quickly (sometimes alarmingly so!). One explicit correction, especially when backed by documented preferences that it stores in workspace md files, often changed behaviour permanently.

Communicating with OpenClaw

Clarity matters more than cleverness. These systems are far better at technical problem solving than mind reading! When I was vague, I paid for it in wasted tokens, odd detours, and solutions that were technically ‘correct’, but practically useless. When I was precise about what I wanted and what I didn’t, the results improved dramatically and stayed that way.

Autonomy turned out to be another important dial. It seems that you need to give an always-on agent enough freedom to be useful, but not so much that it becomes risky or unpredictable. I had the best results when I defined (very!) explicit boundaries around what was safe, what required confirmation from me, and what was simply not allowed. I didn't expect it to be perfect, but it adhered reasonably well to simple, clearly stated rules, which was enough to build confidence over time (and sleep slightly better!). I'm sure there's still a risk of it overstepping some boundaries, so until this matures, I'm keeping the blast radius as small as possible, even for the worst-case scenario.
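
To make that a bit more concrete, here's a minimal sketch (in Python, purely illustrative) of the three-tier rule set I had in mind. None of these action names or this API come from OpenClaw itself; it's just the shape of the policy.

```python
# Illustrative only: a toy three-tier policy for agent actions.
# The action names and this structure are invented for the example,
# not taken from OpenClaw.
ALLOWED = {"read_file", "list_directory", "search_web"}
CONFIRM = {"send_email", "create_calendar_event", "install_package"}
# Anything not explicitly listed falls through to deny.

def decide(action: str) -> str:
    """Return 'allow', 'confirm', or 'deny' for a requested action."""
    if action in ALLOWED:
        return "allow"
    if action in CONFIRM:
        return "confirm"   # pause and ask the human first
    return "deny"          # default-deny keeps the blast radius small

if __name__ == "__main__":
    for a in ("read_file", "send_email", "delete_partition"):
        print(a, "->", decide(a))
```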

I also found that you have to accept a certain amount of trial and error. Things don’t always work on the first attempt, and that’s part of the deal. In one case, a system looked “healthy” while doing absolutely nothing useful because it was monitoring the wrong signal. In another, an automation kept hanging because the machine was quietly waiting for a human approval prompt I couldn’t see, despite being on the desktop! These weren’t exotic failures, just the kind you only discover by actually running the thing.

Pro tip: don’t /reset in the middle of a long conversation unless you have to, as it will forget some vital things. When you finish a piece of work, though, a /reset will shrink the context window and save you tokens!

Tokens, tokens everywhere…

Tokens deserve special attention, because if you’re not careful, OpenClaw will burn through them with impressive enthusiasm. What worked best for me was treating models as tools with different costs, not interchangeable brains. For routine work and exploratory steps, I’ve heard good things about cheaper models like Kimi K2.5, which reportedly hold up remarkably well. For me, most of the orchestration and day-to-day thinking ran on Anthropic’s Claude Sonnet 4.5, which struck a good balance between capability and cost. When I genuinely needed deep reasoning, I escalated to Opus 4.5 deliberately, rather than by default. For code-heavy work, ChatGPT 5.2 performed well and, somewhat surprisingly, survived the token usage better than expected. One practical tip: avoid pay-as-you-go APIs unless you enjoy anxiety. And if you’re using this for anything more than extremely light testing, upgrading to the Anthropic Max plan is often the least painful option in the long run (I went for the $100-equivalent tier for this month and it’s holding up well so far).
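
For what it's worth, the routing logic I ended up with mentally looks something like the sketch below. It's a hypothetical Python routing table, not an OpenClaw feature, and the model identifier strings are placeholders rather than real API model IDs; the tiers simply reflect the models mentioned above.

```python
# Hypothetical cost-aware model routing; not an OpenClaw feature.
# Task labels and model identifier strings are placeholders for illustration.
ROUTES = {
    "routine":        "kimi-k2.5",         # cheap exploratory / housekeeping work
    "orchestration":  "claude-sonnet-4.5",  # day-to-day thinking
    "deep_reasoning": "claude-opus-4.5",    # escalate deliberately, not by default
    "coding":         "chatgpt-5.2",        # code-heavy tasks
}

def pick_model(task_type: str) -> str:
    """Fall back to the mid-tier orchestrator for anything unclassified."""
    return ROUTES.get(task_type, ROUTES["orchestration"])

if __name__ == "__main__":
    print(pick_model("routine"))         # kimi-k2.5
    print(pick_model("deep_reasoning"))  # claude-opus-4.5
    print(pick_model("unknown"))         # claude-sonnet-4.5
```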

One habit that paid off faster than expected was documenting as I went, or better yet, having the bot do it for me! Notion worked particularly well for this, and having a written record of decisions, preferences, and fixes turned out to be just as valuable as the automation itself (future me will almost certainly agree).

The Verdict

Stepping back, OpenClaw is clearly bleeding-edge software with some rough, bordering on paper-cut level edges! Some operations are fragile, the learning curve is real, and you are very much learning alongside the system rather than using a polished product. If you’re technical, patient, and comfortable in a terminal, it’s genuinely astounding, and will surprise you regularly (in a good way!), but it’s not going to be a totally smooth ride, and it’s absolutely nowhere near ready for the general public to safely play with!

I also discovered ways to use ChatGPT or Claude independently of OpenClaw that I wasn’t previously aware of. Simple things like scheduling reminders and having information sent to you later are all feasible, which actually removes some of the reasons (and therefore the risk) for using OpenClaw at all, depending on your use case.

After three days, my takeaway is that an AI assistant isn’t magical or terrible; it’s extremely useful. It’s definitely more like that teenager on work experience, except they rarely forget anything, work continuously, and execute precisely once you’ve taken the time to teach them how you operate (but still under strict supervision!). The value isn’t in a single impressive feature, but in the compound effect of lots of small, but amazing things.

I’m cautiously optimistic.

Ask me again in a couple of weeks.


Is the Cloud actually greener?

This week, I returned from an amazing family adventure holiday in Morocco, where the country’s wonderful culture and fascinating history made it (I hope!) an unforgettable experience for my kids. However, recent droughts there have had severe consequences for the country’s agriculture, economy, and water resources. A reduction in rainfall over the past two years has damaged crops, pushed up food prices, and worsened water scarcity, affecting millions of people and raising concerns about long-term sustainability.

During one of many hours on the minibus, travelling between regions, my family asked me about the cloud and what impact it has on the environment. This has obviously been a massive topic over the past few years, prompting the hyperscalers to take a very public stance on the matter, for example, the re:Invent 2021 sustainability announcement by AWS.

We all know that cloud computing has become an essential part of modern life, changing the way we work, play, and communicate arguably faster than at any other time in history! I would suggest that there are a huge number of sustainability benefits to adopting the cloud, but that doesn’t mean its environmental impact is zero. As with all things, we should be looking at the pros, cons, and mitigations.

Just some of the Pros

The cloud allows businesses to reduce energy consumption and hardware waste significantly. By using shared cloud resources, organisations can shed their low-utilisation, on-premises hardware footprint and the redundant kit sitting idle for HA and DR, all of which requires electricity, cooling, shipping, and maintenance. Cloud providers typically use state-of-the-art, energy-efficient data centres with huge economies of scale to minimise the overall carbon footprint.

Speaking of which – economies of scale! Hyperscalers benefit from massive economies of scale, making it more efficient for them to build, manage and maintain data centres. They have the budgets to invest in advanced technologies and energy-efficient infrastructure, leading to a lower environmental impact compared to small-scale, on-premises solutions (or even traditional colo).

On-demand scalability in the cloud allows organisations to optimise resource utilisation and remove the need to over-provision hardware for peak demand or HA/DR. This not only reduces waste by ensuring only the necessary compute resources are used, but also reduces TCO and frees up budget to be used elsewhere!

Something perhaps overlooked at times is that the cloud increasingly enables remote working, thereby providing better work/life balance for people and reducing the environmental impact of commuting. Greenhouse gas emissions from vehicles have a massive impact, which (especially in temperate countries) can be mitigated by more working from home. Furthermore, the ubiquity of 4G and 5G mobile communications provides access to compute resources from remote locations where they would not otherwise have been available. This will likely increase utilisation and overall impact, but it will also improve lives all over the world and will likely drive further innovation that benefits the environment.

Lastly, as bonkers as it is to even need to remind people of this in 2023, cloud computing virtually forces users to adopt virtualisation, utilising resources far more efficiently than traditional full-fat tin. It’s mind-boggling how many companies are still uncomfortable virtualising heavy workloads such as databases today, despite all of the classic concerns having been mitigated.

Remote village in Moroccan mountains

A Few Risks

The largest risk, though possibly the one that carries its own mitigations, comes from increased adoption. The growing popularity of cloud computing means that demand for data centre resources is rising massively. As more businesses move their operations to the cloud, the energy consumption of centralised cloud data centres will continue to grow (even as local consumption falls), and beyond that, all those very clever humans finding new ways to use this technology are likely driving utilisation well beyond our traditional baselines.

The choice of location for data centres can have a significant impact on the environment. In regions where electricity is generated using fossil fuels, cloud computing indirectly contributes to higher greenhouse gas emissions, and cooling data centres in hot climates can be super energy-intensive. If data sovereignty is not an issue, then choosing compute regions close to renewable energy sources or natural cooling can help mitigate this.

Inefficient development practices and code bloat further add to the risk landscape. The availability of virtually unlimited resources in the cloud may inadvertently reduce the drive for developers to write efficient code. Promoting clean development practices and optimisation is essential to minimise energy consumption. We should be fostering a culture of efficiency and sustainability right from the early stages of developer education to ensure this issue doesn’t continue to creep into the cloud. The growing trend of microservices architectures may actually help here, encouraging developers to think in terms of small, efficient modules, but that remains to be seen!

One of the fastest-growing consumers of energy and hardware is cryptocurrency. The massive amount of power used not only to mint new coins but also to process transactions on the chain is a significant concern. Dedicated crypto hardware, such as ASICs designed specifically for mining, can help, as it is more energy-efficient than general-purpose hardware like GPUs. I would hope that miners will adopt it more widely, if only for their own benefit, if not for the environment!

TLDR

So to respond to the question posed by the title of this post, I believe the answer is yes, but there are some key considerations to ensure it remains so.

To make cloud computing a truly sustainable solution, we need to advocate for the use of renewable energy sources by cloud providers and push for net-zero carbon emissions in our cloud platforms (not just through buying carbon credits, but through actual change). Harnessing solar, wind, and hydroelectric power can enable cloud providers to decrease their dependence on fossil fuels and shrink their carbon footprint, but this will always be region-specific and affected by data sovereignty regulations.

As consumers of the cloud, we have a crucial role to play by opting for cloud service providers that prioritise eco-friendly practices as well as adopting those ourselves, from architecture to development, fostering a culture of well-architected sustainability in our own organisations.


TekBytes #5: The Current State of Cloud Security

Discussing the concept of Cloud Security over breakfast with my kids (yup – poor kids I hear you say!), I was thinking about the current state as one of constant (and accelerating) evolution and improvement. As more businesses adopt cloud computing, the need for robust and effective security measures has become increasingly important. While cloud hyperscalers have made significant investments in securing their platforms, the responsibility for implementing and maintaining effective security measures ultimately falls on customers or those they entrust to manage their platforms on their behalf.

Challenges

There are many challenges that businesses face when it comes to cloud security and far too many to go into in a TekBytes thought of the day, but let’s look at a few.

One major challenge is the lack of visibility and control over the infrastructure and data that are hosted in the cloud. This can make it very difficult to identify and address security vulnerabilities and threats. Another challenge is the complexity of cloud security, which can be exacerbated by the use of multiple cloud providers, each with their own security protocols and standards. Finally, we have a huge lack of skills in the market, and those few people with the skills are constantly being tempted by offers of outrageous salaries, so retaining your talented teams is really tough!

Despite these, there have been really significant advancements in cloud security in recent years. The hyperscalers have implemented many new security measures, such as encryption, improved access controls and policies, and significantly better monitoring tools, to help protect their platforms and their customers’ data. Post-Covid, with customers moving to the cloud in even larger numbers, it’s also great to see that customers have become more aware of the importance of cloud security and are taking steps to prioritise it.

The threat landscape for cloud security continues to evolve, with new and extremely sophisticated attacks emerging all the time. Businesses need to keep up and be proactive in their approach to cloud security.

Tips

So, here are a few quick tips to think about if you haven’t already started taking your cloud security seriously.

  1. Implement multi-factor authentication (MFA). A bit like when you hear sports commentators or coaches talking about a losing team, the common thread is simply not doing the fundamentals / basics well. One of the most effective ways to improve cloud security is to require MFA for all users accessing cloud resources (not just root); there’s a quick audit sketch after this list. Lack of MFA is like leaving your car door unlocked and crying out to have your vehicle taken for a Ferris Bueller-style joy ride!
  2. Regularly review and update security policies. It’s important for businesses to regularly review and update their security policies to ensure they’re aligned with current best practices and standards, which are constantly evolving. This covers things like access controls, password policies, data encryption, and incident response plans. By keeping security policies up to date and ensuring that all employees are aware of them, businesses can significantly reduce the risk of security breaches.
  3. Investigate the use of third-party security tools and services. Tools (if properly implemented) provide additional layers of protection, such as threat detection and monitoring, vulnerability scanning, and data encryption. Engaging security experts, either one-off or on a regular basis, to recommend improvements to your security posture is also worth considering, as is simply outsourcing management of your cloud estate.
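
To make tip 1 slightly more concrete, here's a minimal Python sketch using boto3 that lists IAM users with no MFA device registered. It assumes boto3 is installed and AWS credentials with IAM read permissions are already configured; treat it as a starting point for an audit, not a complete control (and remember the root account isn't an IAM user, so check its MFA separately).

```python
# Minimal sketch: list IAM users with no MFA device registered.
# Assumes boto3 is installed and AWS credentials with IAM read access are configured.
import boto3

iam = boto3.client("iam")

def users_without_mfa():
    """Yield the names of IAM users that have zero MFA devices attached."""
    paginator = iam.get_paginator("list_users")
    for page in paginator.paginate():
        for user in page["Users"]:
            devices = iam.list_mfa_devices(UserName=user["UserName"])
            if not devices["MFADevices"]:
                yield user["UserName"]

if __name__ == "__main__":
    for name in users_without_mfa():
        print(f"No MFA: {name}")
```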

I’m genuinely hopeful that the emerging (and frankly astounding) improvements in artificial intelligence will have a positive and significant impact on businesses that don’t or can’t spend the time and resources to protect themselves and their customers effectively. If they don’t, we’re only going to see a proliferation of more high-profile and high-impact cases in the news!


TekBytes #4: Why I’ve Switched to Simple Markdown in WordPress

I’m always looking for ways to improve my workflow and productivity; most recently I’ve started using Markdown for as many projects as I can, so using Markdown in WordPress is no exception!

If you haven’t seen or used Markdown before, it’s a super-lightweight markup language that allows you to add formatting elements as you go using special syntax. For example, if you want a section of text to display in italics, simply wrap the word or sentence in asterisks. When the output is then parsed, your *Italics* becomes Italics.
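
If you want to see that parsing step for yourself, here's a tiny Python example using the third-party markdown package (assumed installed via pip install markdown; it isn't part of the standard library) to convert a snippet into HTML:

```python
# Tiny demo of Markdown syntax being parsed into HTML.
# Assumes the third-party "markdown" package: pip install markdown
import markdown

snippet = """# A Heading

Some *italic* text and a list:

* first item
* second item
"""

print(markdown.markdown(snippet))
# Produces HTML along the lines of:
#   <h1>A Heading</h1>
#   <p>Some <em>italic</em> text and a list:</p>
#   <ul><li>first item</li><li>second item</li></ul>
```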

Why is this useful?
  • Like an XML file, it’s open, super portable and easily readable across many applications, operating systems and web platforms (think Reddit, GitHub, Stack Exchange, Confluence, etc).
  • It saves time when editing simple documents as you don’t have to go back and highlight/modify formatting, just add the syntax as you go, often with the use of one or two characters at the start of a line. Simple examples might be a bulleted list, where you add an asterisk *, or an H1 heading where you add a single hash symbol #.
  • It’s fantastic for writing documentation where you might want to insert a quick code snippet or command `just like this`!
  • Due to the very simple notation, it’s far quicker than writing HTML and can be substituted for HTML on many publishing platforms.
  • If Git is already part of your workflow, it makes for easy collaboration with others (ideal for Devs!) and you can use GitHub for both version control and easy access from anywhere to your in-progress content.
  • Learning and practising with Markdown opens up future opportunities to move to various publishing platforms such as Jekyll, Hugo, etc. I might even think about giving the site a facelift sometime soon!
  • New skill to master, innit?
Screenshot: the Markdown block – activating Markdown in WordPress
You’ve sold me! How do I start?

I don’t know why WordPress doesn’t enable it by default, but it does come as part of Jetpack (which I assume 90% of sensible WordPress users are using!). If you want to adopt it in your own blogging workflow too, simply install Jetpack, then enable it under Settings → Writing.

You can find detailed instructions here:
WordPress Support – Enable Markdown in WordPress

Here’s a quick intro to how to use Markdown:
Markdown – Getting Started

After writing a couple of draft posts with it so far, I have to say I’m pretty happy with it. The next step is to find a decent plugin for linking to GitHub, enabling me to write and edit posts in my favourite text editor (VSCode, Sublime Text, or Atom, depending on my mood), then push them up via Git!
