In a world obsessed with giving AI agents more - more tools, more memory, more instructions stuffed into their context - NonBioS has been quietly doing the opposite. Every character that enters the agent's context has to earn its place. And nowhere is this philosophy more visible, or more powerful, than in how NonBioS handles skills.
What Are Skills?
Skills are a relatively new concept in the AI agent world. At their core, a skill is a structured document - typically a SKILL.md file - that teaches an agent how to perform a specific, repeatable task well. Think of it as a how-to guide written not for humans, but for AI agents. A skill for SEO content writing tells the agent how to structure articles, pick keywords, and match search intent. A skill for UX design communicates current best practices and component patterns. A skill for internet research explains how to call search APIs, extract content cleanly, and synthesize results.
Skills were formalized through the Agent Skills specification, an open standard that makes skills portable - write one skill, use it with any compatible agent. Anthropic's Claude, for instance, supports skills and even ships a public skills repository. The goal of the spec is simple: let communities build up shared libraries of specialized knowledge that any AI agent can tap into on demand.
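Concretely, a skill built to the Agent Skills specification is a directory containing a SKILL.md: a short YAML frontmatter block (name, description) followed by the instructions themselves. A minimal sketch - the fields and bullet points here are illustrative, not an authoritative template:

```markdown
---
name: seo-content-writing
description: Guidelines for structuring articles around keywords and search intent.
---

# SEO Content Writing

- Lead with the primary keyword in the title and opening paragraph.
- Match one search intent per page; don't blend informational and transactional.
- Use descriptive H2/H3 subheadings that answer related queries.
```

The frontmatter is what lets an agent discover and describe the skill; the body is what it actually follows.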
NonBioS supports the full Agent Skills specification. Any skill built to that standard works with NonBioS out of the box. But there's something important - and very intentional - about how NonBioS engages with skills.
The NonBioS Difference: Skills That Stay Lean
Take a look at the NonBioS skills repository on GitHub. You'll notice something immediately if you compare these skills to others in the ecosystem: they're short. Deliberately, aggressively short.
This isn't a lack of effort. It's a design constraint rooted in a fundamental insight about how NonBioS works.
NonBioS operates with a limited context window relative to the task complexity it handles. Long-horizon software engineering sessions - sometimes spanning hours - require every token in context to be working hard. A skill that rambles through background theory, extended examples, and exhaustive edge cases isn't helpful. It's noise. It competes with the actual task the agent is trying to solve.
NonBioS skills are written to capture only what is truly required for the agent to do the task well. No padding. No redundancy. Just the essential signal. This makes them faster to load, easier for the agent to reason over, and - critically - less likely to crowd out the things that actually matter, like understanding your codebase.
Strategic Forgetting: Why Context Discipline Matters
To understand why this matters so deeply at NonBioS, you need to understand Strategic Forgetting - the proprietary context engineering architecture that sits at the heart of how NonBioS works.
Most AI agents are built on an implicit assumption: more context is better. Feed the agent everything. Let the attention mechanism sort it out. The problem, as the NonBioS team discovered early, is that this approach breaks down badly in long-horizon tasks. The agent doesn't become more capable as its context fills up. It becomes less capable. It starts going in circles. It re-attempts approaches it already tried. It loses the thread of what it's actually trying to build.
Our founder wrote about this directly: when you're deep in debugging a complex system, you naturally filter out background noise. You keep what matters and let the rest fade. You know you can look up the specific error message again. What you need to hold onto is what that error means and what you've already tried. That's exactly what Strategic Forgetting does - it continuously prunes the agent's working context, preserving understanding while releasing raw detail that can always be re-fetched from the environment.
This architecture is what enables NonBioS to sustain coherent, high-quality work over multi-hour sessions. And it's exactly why skills, in the NonBioS context, need to be compact. A bloated skill definition isn't just wasteful - it directly undermines the cognitive discipline that makes long-horizon autonomy possible.
How Skills Work in NonBioS Today
Using skills in NonBioS is explicit and deliberate. You ask NonBioS to learn a skill by pointing it at the skill's source. For example:
Learn skill from https://github.com/nonbios-1/skills/tree/main/skills/seo-content-writing
This is different from how Claude handles skills, where the system may search and load skills automatically in the background. In NonBioS, you're in the driver's seat. You decide when a skill is relevant, you invoke the learning explicitly, and then NonBioS applies it to your task.
This explicit model has a real advantage: it's predictable. You know exactly what knowledge the agent is working with. There's no ambiguity about whether a skill was loaded or which version was used. You asked, it learned, now it knows.
There's also something genuinely exciting here for builders. If you're working on a UX project with NonBioS and you're not happy with the quality of the interface it's producing, you can find the latest UX skill from the community, ask NonBioS to learn from it, and instantly upgrade its capabilities for that session. No redeployment. No configuration files. Just point the agent at better knowledge and continue working. The agent's ability to build UX evolves in real time.
Over time, we plan to make skill loading progressively more seamless - reducing friction while preserving the underlying transparency about what the agent knows. But for now, the explicit model keeps things clean and auditable.
Skills Were Always Going to Work Here
Here's the thing that's easy to miss: skills work in NonBioS not because the team added special skill support. They work because of a core architectural decision made at the very beginning.
NonBioS uses zero native tool calls. None. Every interaction the agent has with its environment happens through raw shell commands - the same bash it uses for everything else. File edits, test runs, API calls, package installations. All of it. This was a deliberate choice to keep the system prompt lean and tightly controlled. Native tool call schemas are expensive in context. Each tool definition, with its parameter descriptions, type annotations, and usage instructions, costs tokens in the system prompt. NonBioS pays none of that tax.
The consequence? Every command-line tool that has ever existed is already available to NonBioS. curl, jq, grep, sed, npm, pip, ffmpeg, psql - if it runs in a terminal, NonBioS can use it. Skills that rely on CLI tooling just work. Skills that describe how to call external APIs just work. There was no integration effort required because the agent already operates at the level of the shell.
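Because everything runs through the shell, a skill's instructions can simply name ordinary CLI pipelines. A minimal sketch of the kind of command sequence a research skill might describe - the JSON here is inline sample data standing in for a real API response, not output from any actual service:

```shell
#!/bin/sh
# A skill can instruct the agent in plain CLI terms: fetch JSON
# (simulated here with an inline string in place of curl) and
# extract the fields it needs with jq.
response='{"results":[{"title":"First","url":"https://a.example"},{"title":"Second","url":"https://b.example"}]}'

# Pull out just the titles, one per line - no tool schema required.
printf '%s' "$response" | jq -r '.results[].title'
```

No tool definition was registered anywhere; the "integration" is just the agent knowing that jq exists and what flags it takes.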
In a world that keeps asking "how do we teach the agent to use more tools?", we started from the premise that the agent already knows how to use every tool. What matters is the intelligence it brings to the work.
The Bigger Philosophy
There's a real tension in how the industry is thinking about AI agents right now. The dominant instinct is additive - more tools, more memory, bigger context windows, richer scaffolding. The assumption is that capability comes from giving the agent more to work with.
We at NonBioS have been running the opposite experiment. The insight driving it: when an agent is deep in your codebase, you don't want it remembering the schema for fifty tool calls. You want it remembering how your architecture is set up, what decisions were made and why, and what the current state of the system actually is. Tool knowledge crowds out task knowledge. And task knowledge is what determines whether the work gets done.
Skills fit into this philosophy naturally. A lean skill adds targeted capability without bloating the agent's working context. It's the difference between handing someone a focused reference card and asking them to memorize an encyclopedia.
NonBioS is also fully compatible with skills built by anyone following the Agent Skills specification. The community of skills being developed for Claude, Cursor, and other compatible tools is available to NonBioS users. You don't have to wait for an official integration. Point the agent at a SKILL.md and you're done.
The NonBioS skills repository will continue to grow. The philosophy behind each skill it publishes will remain consistent: short, sharp, and honest about what the agent actually needs to know.