Joined as XGIMI’s 7th US employee to rebuild the Americas ERP system and introduce AI-powered automation. Deployed a multi-agent system on OpenClaw — with Claude as the LLM backend and n8n as the workflow layer — that automated financial reconciliation, demand forecasting, and inventory optimization.
XGIMI makes premium home projectors — $800–$2,500 devices sold through Amazon, Best Buy, and direct-to-consumer. The US operation runs lean: seven people managing a $15M+ revenue stream across sales, operations, and finance.
When I joined in July 2024, the ERP system was a patchwork of SAP modules, manual spreadsheets, and tribal knowledge. Financial reconciliation took 3 days per month. Demand forecasting was gut feel. I was brought in to fix the foundation and build the AI layer on top.
I’m the sole PM for both the ERP re-architecture and the AI pipeline. I report to the VP of Operations and work directly with our SAP consultant, a contract data engineer, and the finance team. I also write production code: I built and deployed the agent pipelines myself, using Claude Code as the dev tool and OpenClaw as the runtime.
Mornings are ERP — SAP configuration, data migration, process mapping with Finance and Ops. Afternoons are AI — designing agent architectures, writing prompts, testing reconciliation outputs, and building the n8n automation layer. I ship code alongside specs.
XGIMI’s Americas operation was growing faster than its systems could handle. Financial reconciliation — matching Amazon payouts, Best Buy remittances, DTC orders, and SAP records — took the finance team 3 full days every month. Demand forecasting was a monthly meeting where people guessed. Inventory allocation across 5 locations was managed in a shared Google Sheet.
3-day monthly reconciliation
No automated demand signals
5-location inventory blind spots
I mapped every data flow, identified every manual handoff, and rebuilt the system in layers — ERP foundation first, then AI automation on top.
The system I built on OpenClaw isn’t a dashboard — it’s an autonomous multi-agent pipeline. Four AI agents work in coordination: the reconciliation agent processes financial data nightly, the forecasting agent updates demand signals weekly, the inventory agent monitors stock continuously, and the reporting agent synthesizes everything into executive-ready outputs. Human review is built in at every critical decision point.
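The coordination pattern above can be sketched in a few lines. This is a minimal, hypothetical illustration of the four-agent loop with human review gates; the agent names and cadences come from the text, but the `Agent` structure, function names, and toy payloads are illustrative, not the actual OpenClaw configuration.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of the four-agent pipeline. Schedules and
# payloads are stand-ins, not the production system.

@dataclass
class Agent:
    name: str
    schedule: str  # cadence from the text: "nightly", "weekly", etc.
    run: Callable[[dict], dict]
    needs_review: Callable[[dict], bool] = lambda result: False

def run_pipeline(agents: list[Agent], context: dict) -> dict:
    """Run agents in order; pause any flagged result for human review."""
    for agent in agents:
        result = agent.run(context)
        if agent.needs_review(result):
            result["status"] = "pending_human_review"
        context[agent.name] = result
    return context

# Toy agents standing in for the real reconciliation, forecasting,
# inventory, and reporting logic.
pipeline = [
    Agent("reconciliation", "nightly", lambda ctx: {"matched": 0.97}),
    Agent("forecasting", "weekly", lambda ctx: {"demand_delta": 0.12}),
    Agent("inventory", "continuous", lambda ctx: {"stockouts": []}),
    Agent("reporting", "daily", lambda ctx: {"summary": "ok"}),
]
state = run_pipeline(pipeline, {})
```

The design choice worth noting: each agent writes into a shared context keyed by its name, so downstream agents read structured outputs rather than raw data.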
We evaluated building custom ML models for reconciliation. Claude won — it handled the messy, unstructured matching (partial payments, split shipments, promotional adjustments) that rule-based systems and fine-tuned models couldn’t parse. And I could iterate on prompts in hours, not retrain models over weeks.
Instead of one large automation, I built four specialized agents that communicate through structured handoffs. Each agent has a clear scope, its own error handling, and can be updated independently. When the reconciliation logic needed to change for a new marketplace, I updated one agent without touching the others.
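A structured handoff of the kind described might look like the following. This is a sketch under assumptions: the field names, schema version, and validation contract are hypothetical, chosen only to show why one agent's logic can change without breaking the others.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical handoff schema between agents; field names are
# illustrative, not the production contract.

@dataclass(frozen=True)
class Handoff:
    source_agent: str
    target_agent: str
    as_of: date
    payload: dict
    schema_version: str = "1.0"

def validate_handoff(msg: Handoff, expected_keys: set[str]) -> bool:
    """Each agent validates inbound handoffs against its own contract,
    so internal changes in one agent stay invisible to the others."""
    return expected_keys.issubset(msg.payload.keys())

msg = Handoff(
    source_agent="reconciliation",
    target_agent="reporting",
    as_of=date(2024, 11, 1),
    payload={"exceptions": 12, "auto_matched": 488},
)
```

Because agents only depend on the handoff contract, adding a new marketplace means updating one agent's internals and, at most, bumping the schema version.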
Every agent output goes through a confidence scoring system. High-confidence actions (>95%) execute automatically. Everything else gets flagged for human review. This isn’t a limitation — it’s the architecture. Finance trusts the system because they know it won’t make decisions it isn’t sure about.
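The routing rule reduces to a single threshold check. The 95% cutoff is from the text; the function and field names below are illustrative.

```python
# Hypothetical confidence-routing rule: auto-execute above 0.95,
# flag everything else for human review.

AUTO_EXECUTE_THRESHOLD = 0.95

def route(action: dict) -> str:
    """Return 'auto' for high-confidence actions, 'review' otherwise."""
    if action["confidence"] > AUTO_EXECUTE_THRESHOLD:
        return "auto"
    return "review"

decisions = [
    {"id": "txn-001", "confidence": 0.99},
    {"id": "txn-002", "confidence": 0.80},
]
routed = {a["id"]: route(a) for a in decisions}
# txn-001 executes automatically; txn-002 lands in the review queue.
```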
The first version of the reconciliation agent was technically impressive but practically useless. I had optimized for matching accuracy without understanding how the finance team actually worked. They didn’t need perfect matches — they needed the agent to surface the 5% of transactions that were genuinely ambiguous, with enough context to resolve them in under a minute. I rebuilt the output layer to match their workflow: a daily exception report ranked by dollar impact, with one-click resolution paths. Adoption went from skeptical to dependent in two weeks.
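The rebuilt output layer can be sketched as a filter-and-rank step: keep only the ambiguous transactions, order them by absolute dollar impact, and attach a suggested resolution. The field names and sample data are hypothetical.

```python
# Hypothetical sketch of the daily exception report: surface only the
# ambiguous transactions, biggest dollar impact first, each carrying a
# suggested resolution path for one-click handling.

def exception_report(transactions: list[dict]) -> list[dict]:
    """Keep ambiguous items, ranked by absolute dollar impact."""
    ambiguous = [t for t in transactions if t["status"] == "ambiguous"]
    return sorted(ambiguous, key=lambda t: abs(t["amount"]), reverse=True)

txns = [
    {"id": "A1", "status": "matched", "amount": 1200.00},
    {"id": "A2", "status": "ambiguous", "amount": -340.50,
     "suggested_action": "apply promo adjustment"},
    {"id": "A3", "status": "ambiguous", "amount": 2100.00,
     "suggested_action": "link split shipment"},
]
report = exception_report(txns)
# A3 ($2,100) outranks A2 ($340.50); A1 is already matched and excluded.
```

The point of ranking by dollar impact rather than match score is that the finance team triages by materiality, not by model uncertainty.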
Nobody at XGIMI requested an AI system. I had to prove value before I could propose it.
Shadowed Finance for a week. Documented every manual step in reconciliation. Made the cost of the status quo undeniable
Built the first reconciliation agent in 3 days — Claude Code as the dev tool, OpenClaw as the runtime. Ran it against real data and showed Finance the results side-by-side with their manual output
Once reconciliation worked, the forecasting and inventory teams asked to be next. Pull, not push
Built an internal wiki documenting every agent’s logic, inputs, outputs, and failure modes. Made the system transferable, not dependent on me
The goal was never “build an AI system” — it was fix an ERP that couldn’t scale. Once the work broke down into pattern matching and data transformation, agents were the obvious architecture. Shipping it solo with Claude Code compressed months of data engineering into weeks.
I would have involved Finance from day one of the agent build, not after the first version was done. The technical architecture was right, but the output format was wrong because I hadn’t watched them work closely enough. I’d also invest earlier in monitoring and alerting — we had a silent failure on the inventory agent for 48 hours before anyone noticed. The system needs to tell you when it’s broken, not wait for you to check.
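The monitoring gap described above is typically closed with a heartbeat check: each agent records a last-success timestamp, and a watchdog flags any agent quiet longer than its expected cadence. The sketch below is hypothetical; the cadences and names are illustrative, with the 48-hour inventory failure as the motivating example.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical heartbeat watchdog for the silent-failure lesson.
# Allowed quiet periods are illustrative (nightly job + slack, etc.).
CADENCE = {
    "reconciliation": timedelta(hours=26),  # nightly run, with slack
    "inventory": timedelta(hours=1),        # continuous monitor
}

def stale_agents(last_success: dict, now: datetime) -> list[str]:
    """Return agents whose last success exceeds their allowed cadence."""
    return [
        name for name, allowed in CADENCE.items()
        if now - last_success[name] > allowed
    ]

now = datetime(2024, 11, 2, 12, 0, tzinfo=timezone.utc)
last = {
    "reconciliation": now - timedelta(hours=10),
    "inventory": now - timedelta(hours=49),  # the 48-hour silent failure
}
overdue = stale_agents(last, now)
# -> ["inventory"]: the watchdog would have caught it within an hour.
```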