Projects

Eighteen months of building creative systems, multi-agent architectures, and institutional-scale reasoning tools.

Act I — Creative Proof

(2024 — Early 2025)

We began with creative experiments to understand whether structured tensions could produce outputs beyond the reach of single voices.

Oct 2024

Zafiro Cycle — Poetry Series

AI-assisted poetry exploring identity, migration, and human-machine symbiosis across six volumes.

Nov 2024

MAAG v1.0 — Multi-Agent Article Generator

Early system for generating structured articles through role-based collaboration.

May 2025

The Last Algorithm — Novel

Techno-thriller written through deliberate AI-human reasoning loops.

Act II — Architectural Extraction

(Mid 2025)

The creative proof revealed the underlying patterns that would become our multi-agent architecture.

2025

MAGI v5

A structured role-based framework enabling auditable multi-agent reasoning.

2025

Transparent Clockwork

A philosophy and visual metaphor to articulate multi-agent clarity and interpretability.

2025

Decision Engine Experiments

Iterative tests across strategic, analytical, and narrative reasoning flows.

Act III — Institutional Validation

(Late 2025 — 2026)

The architecture matured into systems capable of supporting real institutional reasoning.

2025—2026

MAGI v6

The enterprise-grade extension of the architecture, supporting configurable agents, role-based orchestration, and decision auditability.

2025—2026

Edge Multi-Agent Lab

Exploring how MAGI-like reasoning behaves under severe resource constraints: Raspberry Pi clusters for testing low-power orchestration, and MAGI.Lite, where lightweight on-device agents handle immediate, low-stakes decisions while complex or high-uncertainty cases escalate to cloud deliberation.
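The edge/cloud split described above can be sketched as a confidence-threshold router. This is a hypothetical illustration, not the MAGI.Lite implementation: the agent functions, confidence scores, and threshold are all illustrative assumptions.

```python
# Illustrative sketch of the escalation pattern: an on-device agent settles
# high-confidence, low-stakes cases locally; uncertain cases escalate to a
# (here simulated) cloud deliberation step. Names and values are assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    answer: str
    source: str  # "edge" or "cloud"

def edge_agent(query: str) -> tuple[str, float]:
    """Cheap local heuristic: returns an answer and a confidence score."""
    if "routine" in query:
        return "approve", 0.95
    return "unsure", 0.40

def cloud_deliberation(query: str) -> str:
    """Stand-in for a multi-agent deliberation round in the cloud."""
    return f"deliberated({query})"

def decide(query: str, threshold: float = 0.8) -> Decision:
    answer, confidence = edge_agent(query)
    if confidence >= threshold:
        return Decision(answer, "edge")  # low-stakes: settle on-device
    return Decision(cloud_deliberation(query), "cloud")  # escalate

print(decide("routine renewal").source)  # edge
print(decide("novel contract").source)   # cloud
```

The design choice the sketch highlights is that escalation is driven by the edge agent's own uncertainty, so cloud round-trips are paid only when the local decision is not trustworthy.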

2026

Institutional Reasoning Deployments

Testing MAGI-like systems inside real workflows for governance, strategy, and operational clarity.

What's Next

(2026 and beyond)

We now focus on institutional-scale reasoning, agentic infrastructures, and augmented decision systems.

MAGI v7 (Concept)

Autonomous evaluators, self-correcting reasoning loops, and modular deliberation ecosystems.

The Institutional Test

Working with partners to validate how transparent reasoning scales across teams, departments, and governance structures.

Start a Conversation

We collaborate with teams designing the next generation of intelligent reasoning systems.

Email Us