Running into the OpenClaw Founder at a Hackathon: What Else Can “Lobsters” Do?
In early March 2026, London briefly felt like the capital of “agentic everything”. The UK AI Agent Hack x OpenClaw Special Edition landed at Imperial College London with a 7‑day build sprint and a Demo Day finale, drawing 1,200+ builders into the same arena of tooling, workshops, and shipping pressure. The official agenda made it clear that this wasn’t just another campus hack: it was designed to turn AI agents into something closer to production systems, not demos that die on Monday. (See the event timeline and format details on the UK AI Agents Lab hackathon page.)
If you spend your days in crypto, you’ve probably already noticed the subtext: AI agents are becoming economic actors. And once software starts doing real work, it needs three things the crypto stack is unusually good at providing:
- Ownership (who controls the agent and its outputs)
- Payment rails (how it pays, gets paid, and settles globally)
- Verifiability (what it did, when, and under which permissions)
At the hackathon, OpenClaw agents were affectionately nicknamed “lobsters” by the community. The joke is cute, but the implications are serious: in Web3, a “lobster” can become a wallet user, not just a chatbot.
Why this hackathon matters to Web3 builders
According to the event overview, the OpenClaw‑centered edition ran March 1–7, 2026, with an Opening Conference, hands‑on workshops, and a Demo Day judged by investors and industry guests. The scale (and the deliberate focus on real integrations) explains why so many crypto builders were paying attention. (Event structure and dates: UK AI Agents Lab EP.4.)
Historically, crypto hackathons produced new primitives—DEX aggregators, L2 tooling, account abstraction wallets, MEV‑aware infrastructure. In 2025, the trend shifted: builders increasingly asked “How do I make on-chain actions usable by non-experts?” In 2026, the next question is: “How do I make on-chain actions safe for non-humans?”
An AI agent with tool access can:
- read documents and dashboards,
- call APIs,
- push transactions,
- manage positions,
- and coordinate with other agents.
That’s no longer “prompt engineering”. That’s operations—and operations require security boundaries.
The founder’s crypto skepticism is part of the story, not a contradiction
One widely discussed tension is that OpenClaw’s founder has been portrayed as cautious about crypto participation, even while the ecosystem around OpenClaw collides with Web3 experimentation. This “push-pull” shows up clearly in community reporting: crypto builders see agents as natural consumers of stablecoins and on-chain identity, while some agent builders worry that speculation distracts from real utility. A useful snapshot of that debate is captured in Odaily’s coverage of the OpenClaw wave and its Web3 adjacency. (Background and community viewpoints: Odaily report on the OpenClaw boom.)
For Web3 teams, the takeaway isn’t to force-fit tokens into everything. It’s to recognize a more practical arc:
- Agents start as internal productivity tools
- Then become autonomous service providers
- Then need programmatic payments, audits, and permissions
- Only then does tokenization become a design option (not a default)
What else can “lobsters” do in crypto? Five real directions
Below are five agent capabilities that matter specifically in blockchain and crypto—and how builders are starting to approach them.
1) Agentic payments: turning stablecoins into an API
The biggest unlock isn’t “an agent that trades”. It’s an agent that can pay—for data, inference, compute, subscriptions, bounties, or human labor—without a bank integration sprint.
A concrete industry signal here is the emergence of agent-friendly payment protocols that make stablecoin settlement composable with agent-to-agent workflows. Coinbase’s write-up on x402 frames this as “agentic commerce”: agents that can coordinate and settle value flows in the same loop. (Overview: Coinbase on x402 and agentic payments.)
In practice, this enables:
- pay-per-call data feeds
- automated SaaS subscriptions with spend caps
- machine customers buying machine services
- micropayments for content, inference, and APIs
This is the less-hyped but more durable version of “AI x crypto”.
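To make the "spend caps" idea concrete, here is a minimal sketch of a budget-enforcing agent wallet. All names here are hypothetical and the actual stablecoin settlement call is stubbed out; amounts are in base units (e.g. USDC has 6 decimals, so 1_000_000 equals $1.00).

```python
from dataclasses import dataclass

@dataclass
class SpendCapWallet:
    """Hypothetical agent wallet enforcing a hard per-session spend cap.

    The real settlement (a stablecoin transfer) is stubbed out; only the
    budget logic is shown.
    """
    cap: int        # maximum total spend for this session, in base units
    spent: int = 0  # running total of settled payments

    def pay(self, recipient: str, amount: int) -> bool:
        """Settle a payment if it fits under the cap; refuse otherwise."""
        if amount <= 0:
            raise ValueError("amount must be positive")
        if self.spent + amount > self.cap:
            return False  # fail closed: never exceed the budget
        # ...a real implementation would submit the transfer here...
        self.spent += amount
        return True

# Usage: a $1.00 cap, paying for pay-per-call data feeds at $0.60 each
wallet = SpendCapWallet(cap=1_000_000)
print(wallet.pay("0xDataFeed", 600_000))  # True: first call fits
print(wallet.pay("0xDataFeed", 600_000))  # False: second would exceed the cap
```

The point is that the cap lives outside the agent's reasoning loop: no prompt, however adversarial, can talk the wallet past its budget.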
2) On-chain “delegation” instead of handing an agent your keys
If an agent is going to transact, private keys are the wrong abstraction. What you want is delegated authority:
- limited-time permissions
- limited-amount spending
- allowlisted contract interactions
- revocable session keys
This is where account abstraction becomes more than UX—it becomes agent safety engineering. ERC‑4337 formalizes an approach to smart-contract wallets that can implement programmable validation and paymaster flows without changing Ethereum’s consensus. (Primary reference: EIP‑4337.)
A well-designed agent wallet stack can:
- keep a “root” authority offline,
- issue narrow session keys to the agent,
- enforce policy checks before any on-chain action,
- and revoke instantly if the agent misbehaves.
That’s how you let a “lobster” work without letting it own you.
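The delegation rules above can be sketched as a single policy object. In a real ERC-4337 deployment these checks would live in the smart wallet's on-chain validation logic; this is an off-chain illustration with hypothetical names.

```python
import time
from dataclasses import dataclass

@dataclass
class SessionKeyPolicy:
    """Illustrative model of a delegated session key: limited-time,
    limited-amount, allowlisted targets, and instantly revocable."""
    expires_at: float              # unix timestamp (limited-time)
    spend_limit: int               # base units (limited-amount)
    allowed_contracts: frozenset   # allowlisted contract addresses
    revoked: bool = False
    spent: int = 0

    def revoke(self) -> None:
        """Root authority can kill the key at any moment."""
        self.revoked = True

    def authorize(self, target: str, value: int, now: float) -> bool:
        """Every check must pass before the wallet will sign."""
        if self.revoked or now >= self.expires_at:
            return False
        if target not in self.allowed_contracts:
            return False
        if self.spent + value > self.spend_limit:
            return False
        self.spent += value
        return True

# Usage: a one-hour key that may spend up to 50 units on one known router
key = SessionKeyPolicy(
    expires_at=time.time() + 3600,
    spend_limit=50,
    allowed_contracts=frozenset({"0xRouter"}),
)
print(key.authorize("0xRouter", 30, time.time()))   # True
print(key.authorize("0xUnknown", 10, time.time()))  # False: not allowlisted
key.revoke()
print(key.authorize("0xRouter", 5, time.time()))    # False: revoked
```

Notice that the agent never sees the root key at all: it only ever holds a policy-bound credential that the human side can expire or revoke.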
3) Verifiable execution: proofs, logs, and accountability
In Web2 automation, the “audit trail” is whatever your SaaS vendor decides to expose. In Web3, we can do better:
- transaction traces are public
- signatures are attributable
- state changes are inspectable
- incentives can reward good behavior and punish abuse
This opens the door to agent compliance-by-default, where an agent’s action history becomes part of its credibility. Over time, this can evolve into:
- on-chain reputation for autonomous service providers
- escrow-based payments released on verifiable milestones
- dispute resolution anchored by immutable logs
The crucial point: crypto turns agent behavior into something you can verify, not just trust.
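One lightweight way to ground "verify, not just trust" is a tamper-evident action log. The sketch below hash-chains each agent action to the previous entry, so any later edit to history breaks every subsequent hash; the latest hash could then be anchored on-chain for public audit. This is a generic pattern, not any specific project's implementation.

```python
import hashlib
import json

def append_action(log: list, action: dict) -> list:
    """Append an agent action to a tamper-evident hash chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(action, sort_keys=True)  # canonical encoding
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"action": action, "prev": prev_hash, "hash": entry_hash})
    return log

def verify_chain(log: list) -> bool:
    """Recompute every link; any tampering breaks the chain."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["action"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

# Usage: log two actions, verify, then tamper with history
log = []
append_action(log, {"op": "swap", "amount": 100})
append_action(log, {"op": "pay", "amount": 5})
print(verify_chain(log))          # True
log[0]["action"]["amount"] = 999  # rewrite history
print(verify_chain(log))          # False
```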
4) “Agent economies”: agents that earn, not just spend
The most interesting “lobster” isn’t one that executes your tasks—it’s one that runs a small business:
- sells a service (research, monitoring, routing, execution)
- gets paid in stablecoins
- pays for its own compute and data
- reinvests into better tooling
Odaily recently highlighted multiple OpenClaw-adjacent projects experimenting with agents that already generate revenue and coordinate work, pointing toward a broader “agent economy” narrative. (Examples and framing: Odaily on OpenClaw x Crypto projects.)
Even if you ignore tokens entirely, the economic loop matters because it forces discipline:
- measurable output
- measurable cost
- measurable security risk
- measurable ROI
That’s exactly what the AI agent space needs to escape the “cool demo” trap.
5) DeFi automation—only if you treat it like production trading infrastructure
Yes, “lobsters” can do DeFi:
- rebalance
- manage LP ranges
- monitor borrow health factors
- execute intent-based swaps
- run treasury rules
But this is also where agents become dangerous fastest, because DeFi is adversarial and composable. If your agent can sign, attackers will try to:
- prompt-inject it into approving malicious calls
- trick it via poisoned webpages or tool outputs
- drain funds through “helpful” automated steps
Security research is increasingly explicit that tool-enabled agents introduce new exploit surfaces: prompt injection, unsafe tool calling, data exfiltration, and costly failure modes. (See, for example, the OpenClaw-focused security analysis on arXiv: “Don’t Let the Claw Grip Your Hand”.)
So the right mental model is: an agent is an untrusted employee with superpowers. You don’t give that employee the treasury root key.
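That mental model translates into a pre-sign gate that treats every agent-proposed transaction as potentially attacker-shaped (it may have been derived from a poisoned webpage or an injected tool result). A minimal sketch, with hypothetical thresholds:

```python
def pre_sign_gate(tx: dict, allowlist: set, max_auto_value: int) -> str:
    """Decide what to do with a transaction the agent proposed.

    Nothing in `tx` is trusted, since hostile content may have shaped it.
    Returns "sign", "escalate" (human review), or "reject".
    """
    target = tx.get("to")
    value = tx.get("value", 0)
    if target not in allowlist:
        return "reject"    # unknown contract: never auto-sign
    if value > max_auto_value:
        return "escalate"  # large transfer: human-in-the-loop
    return "sign"

# Usage: only two known contracts, auto-sign up to 100 units
allowlist = {"0xKnownRouter", "0xKnownLending"}
print(pre_sign_gate({"to": "0xKnownRouter", "value": 10}, allowlist, 100))   # sign
print(pre_sign_gate({"to": "0xKnownRouter", "value": 500}, allowlist, 100))  # escalate
print(pre_sign_gate({"to": "0xEvil", "value": 1}, allowlist, 100))           # reject
```

The design choice is deliberate: the default answer for anything unrecognized is "reject", and the agent has no code path that can widen the allowlist itself.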
The security baseline: how to let an agent touch crypto without getting wrecked
If you’re building (or using) an agent that interacts with wallets, smart contracts, or exchanges, these are the minimum guardrails that keep “autonomy” from turning into “accident”.
A practical architecture (used by serious teams)
- Cold authority (human-controlled)
  - Root keys stay offline
  - Used only for configuration changes and high-value transfers
- Hot agent wallet (policy-restricted)
  - Small balances
  - Spend limits
  - Allowlisted contracts and methods
  - Short-lived session keys
- Simulation-first execution
  - Pre-flight simulation before broadcast
  - Fails closed if outputs differ from expectations
- Human-in-the-loop for anything irreversible
  - New addresses
  - New contract approvals
  - Large transfers
  - Permission changes
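The "simulation-first" step in the architecture above can be sketched as a fail-closed wrapper. Here `simulate` and `send` are hypothetical callables standing in for node RPC calls (e.g. an `eth_call`-style dry run and a broadcast); the only real logic is the comparison that refuses to proceed on any surprise.

```python
def simulate_then_send(tx, simulate, expected_balance_delta, send):
    """Fail-closed pre-flight: simulate, compare against expectations,
    and only then broadcast. Any mismatch aborts the transaction."""
    result = simulate(tx)
    if result.get("reverted"):
        raise RuntimeError("simulation reverted; refusing to broadcast")
    if result.get("balance_delta") != expected_balance_delta:
        # outputs differ from expectations: fail closed
        raise RuntimeError("unexpected balance change; refusing to broadcast")
    return send(tx)

# Usage with stubbed RPC calls
ok_sim = lambda tx: {"reverted": False, "balance_delta": -100}
bad_sim = lambda tx: {"reverted": False, "balance_delta": -999}
send = lambda tx: "0xtxhash"

print(simulate_then_send({"to": "0xRouter"}, ok_sim, -100, send))  # 0xtxhash
try:
    simulate_then_send({"to": "0xRouter"}, bad_sim, -100, send)
except RuntimeError as err:
    print("blocked:", err)
```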
Why hardware signing still matters in an agent world
The more “autonomous” software becomes, the more valuable it is to keep final authorization physically separated from that software.
A hardware wallet can serve as the last checkpoint where:
- the private key never touches the agent machine,
- the user sees what is being signed,
- and phishing or malicious approvals are easier to catch.
If you use OneKey in this workflow, the fit is straightforward: OneKey is built around offline private key protection, and its software stack emphasizes anti-phishing and clearer transaction understanding—features that become more important when an AI agent is preparing transactions on your behalf. One public reference that summarizes these protections is the OneKey Wallet listing on the Chrome Web Store.
The real lesson from the hackathon: crypto’s job is to make agents safe economic actors
The UK AI Agent Hackathon EP.4 page includes an amusing community rule—“no crypto talk”—which is ironic given how often Web3 shows up as the missing piece in agent infrastructure conversations. (Rules and context: UK AI Agents Lab EP.4.)
But the deeper truth is:
- AI agents are rapidly gaining capability
- crypto provides constraints
- constraints are what transform capability into reliable systems
In 2025, crypto’s biggest user obsession was UX (abstract gas, simplify signing, unify chains). In 2026, a new obsession is emerging: safe autonomy—how to let software act without letting it steal, leak, or self-destruct financially.
“Lobsters” can absolutely do more than write code or browse the web. In crypto, they can become:
- payers and merchants
- DAO participants
- treasury operators
- compliance-aware executors
- autonomous service businesses
But only if we build the permissioning and custody layers as carefully as we build the models.
Closing thought: don’t give your lobster the crown jewels
If you’re experimenting with OpenClaw-style agents in Web3, start small:
- isolate wallets,
- cap spend,
- use account abstraction or delegated keys,
- require hardware confirmation for high-risk actions,
- and treat every tool output as potentially hostile.
Autonomy is coming either way. The opportunity for crypto builders is to ensure that when agents start moving value, they do it under rules humans can verify—and revoke.