AI agents are becoming persistent, autonomous, and deeply embedded in daily workflows. But as they gain the ability to act on our behalf, a harder question arises: who controls the data, the execution, and the trust layer?
Today, NEAR AI introduced its answer. Announced live at NEARCON 2026, IronClaw is a new open-source, verifiable AI agent runtime designed for a future where agents run continuously without exposing sensitive data, credentials, or user intent.
A runtime built for autonomous AI – without blind trust
IronClaw builds on the original OpenClaw vision but strengthens it from the ground up with cryptographic guarantees. Written in Rust and deployed inside Trusted Execution Environments (TEEs) on NEAR AI Cloud, the runtime lets AI agents access tools, maintain memory, and take actions on behalf of users, all within a tightly controlled security boundary.
Instead of asking users to trust opaque platforms, IronClaw shifts the trust model to verifiable execution. Data and inferences remain protected at the hardware level, and agents operate under explicit, enforceable permissions.
Security through architecture, not add-ons
IronClaw is designed with defense in depth as its core principle.
Each untrusted or third-party tool runs in its own sandbox, limited to only the resources it is explicitly authorized to access. Network calls are restricted to approved destinations. Sensitive credentials are injected only at runtime and are never directly exposed to tools or external services.
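The deny-by-default network policy described above can be sketched as a simple per-tool allowlist check. The `ToolPolicy` type and its fields here are illustrative assumptions, not IronClaw's actual API:

```rust
use std::collections::HashSet;

/// Illustrative per-tool policy: a tool may only reach hosts it was
/// explicitly granted. (Hypothetical type; not IronClaw's real API.)
struct ToolPolicy {
    allowed_hosts: HashSet<String>,
}

impl ToolPolicy {
    fn new(hosts: &[&str]) -> Self {
        Self {
            allowed_hosts: hosts.iter().map(|h| h.to_string()).collect(),
        }
    }

    /// Deny by default: a network call is permitted only if its host
    /// appears in the tool's allowlist.
    fn allows(&self, host: &str) -> bool {
        self.allowed_hosts.contains(host)
    }
}

fn main() {
    let policy = ToolPolicy::new(&["api.example.com"]);
    assert!(policy.allows("api.example.com"));
    assert!(!policy.allows("attacker.example.net")); // blocked: not granted
    println!("allowlist enforced");
}
```

The key design choice is that absence from the list means denial; a tool never gains access to a destination that was not explicitly authorized.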
Agent activity is continuously monitored to detect misuse, including protection against prompt injection and resource abuse. All user data is stored locally in PostgreSQL, encrypted with AES-256-GCM, and never shared externally. Importantly, IronClaw collects no telemetry or analytics, keeping operation completely private.
A complete audit log gives users insight into every tool interaction: transparency without surveillance.
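Audit logs of this kind are commonly made tamper-evident by chaining each entry to a digest of the previous one, so that editing any record breaks verification. Below is a minimal stdlib sketch of that general technique; the entry fields and the non-cryptographic hasher are assumptions for illustration, not IronClaw's actual log format:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// One audit record: which tool acted, what it did, and a chain value
/// linking it to the previous entry. (Illustrative fields only.)
#[derive(Debug)]
struct AuditEntry {
    tool: String,
    action: String,
    chain: u64, // digest over this entry plus the previous chain value
}

/// Append a record whose chain value covers the previous entry's chain.
fn append(log: &mut Vec<AuditEntry>, tool: &str, action: &str) {
    let prev = log.last().map_or(0, |e| e.chain);
    let mut h = DefaultHasher::new();
    prev.hash(&mut h);
    tool.hash(&mut h);
    action.hash(&mut h);
    log.push(AuditEntry {
        tool: tool.into(),
        action: action.into(),
        chain: h.finish(),
    });
}

/// Recompute the chain from scratch; any edited entry breaks the match.
fn verify(log: &[AuditEntry]) -> bool {
    let mut prev = 0u64;
    for e in log {
        let mut h = DefaultHasher::new();
        prev.hash(&mut h);
        e.tool.hash(&mut h);
        e.action.hash(&mut h);
        if h.finish() != e.chain {
            return false;
        }
        prev = e.chain;
    }
    true
}

fn main() {
    let mut log = Vec::new();
    append(&mut log, "web_fetch", "GET api.example.com/v1/status");
    append(&mut log, "db_query", "SELECT count(*) FROM notes");
    assert!(verify(&log));

    log[0].action = "GET attacker.example.net".into(); // tamper with history
    assert!(!verify(&log));
    println!("tampering detected");
}
```

A production implementation would use a cryptographic hash rather than `DefaultHasher`, but the chaining structure is the same.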
Privacy-First AI, ready for deployment
IronClaw launches with a free Starter tier that runs a single hosted agent instance in NEAR AI's secure environment, powered by its inference infrastructure. Developers and organizations can scale through flexible paid tiers as their needs grow.
The goal isn’t just safer agents; it’s practical deployment without forcing teams to choose between convenience and control.
Why this matters
As AI systems increasingly serve business incentives and rely on opaque data pipelines, IronClaw represents a different direction: local control, verifiable execution, and privacy by default.
Illia Polosukhin, co-founder of NEAR Protocol and founder of NEAR AI, described IronClaw as an “agentic suit of armor designed for security” that extends NEAR’s full-stack trust model from blockchain infrastructure to the AI layer itself.
Rather than bolting security onto agentic AI after the fact, IronClaw builds it into the runtime, combining confidential inference, cryptographic verification, and hardware-assisted execution into a single system.
A foundation for responsible agentic AI
George Zeng, Chief Product Officer and General Manager of NEAR AI, put the launch more bluntly:
“AI agents are already entering critical workflows, but security, compliance and data ownership remain unresolved. IronClaw aims to close that gap – giving developers and enterprises the confidence to deploy persistent agents without giving up transparency or control.”
IronClaw is available now, with code accessible on NEAR AI’s GitHub.
As AI moves from tools to actors, IronClaw signals a clear position: autonomy should not come at the expense of privacy, and intelligence should never require blind trust.
