Agentic Workflows at IMMO

March 2026

Context

IMMO Capital processes over a billion euros in property transactions monthly across the UK, Germany, and Spain. The core acquisition pipeline — from initial deal sourcing through to investment committee approval — involved a series of manual, judgment-heavy steps that didn’t scale.

This is the story of how we broke down our core evaluation tool into an agentic workflow, what was needed at each stage, and what the system looks like now.

Pipeline overview diagram — placeholder

The Problem

The existing process required analysts to manually pull data from multiple sources — property comparables, regulatory databases, census data, transportation networks — and synthesise it into a structured evaluation. Each deal touched six to eight different data sources and required cross-referencing against internal investment criteria.

The bottleneck wasn’t any single step. It was the sequential handoffs between them. An analyst would complete document triage, pass it to evaluation, wait for data enrichment, then manually compile the decision brief. Each handoff introduced delay and inconsistency.

Breaking It Down

We decomposed the pipeline into three discrete stages, each suitable for an agentic approach. The key insight was that each stage had a clear input, a well-defined judgment to make, and a structured output — the exact shape of work that agents handle well.
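That shape — a clear input, one judgment, a structured output — can be sketched as a common interface. The names and the toy judgment below are illustrative, not our internal code:

```python
from dataclasses import dataclass

@dataclass
class StageResult:
    """The structured output every stage emits."""
    fields: dict       # the stage's structured payload
    confidence: dict   # per-field confidence, 0.0-1.0

class IngestionStage:
    """Clear input in, one well-defined judgment, structured output out."""
    def run(self, payload: dict) -> StageResult:
        raw = payload.get("raw_address", "")
        return StageResult(
            fields={"address": raw.strip()},
            confidence={"address": 1.0 if raw else 0.0},
        )
```

Because every stage returns the same shape, downstream handoff logic only ever inspects fields and confidence, never stage internals.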

Stage 1: Ingestion

Multi-source document triage. The agent pulls from property listings, regulatory filings, and comparable sales data, extracting structured fields from unstructured inputs. This replaced a process that took analysts 45 minutes per deal.

Stage 2: Evaluation

Data enrichment and scoring. The agent cross-references extracted data against internal investment criteria, census data, and transportation access scores. It produces a preliminary evaluation with confidence indicators for each dimension.

Stage 3: Decision

Decision brief compilation. The agent synthesises the enriched evaluation into a structured brief that matches the format our investment committee expects. It flags areas where confidence is low and human review is needed.

Stage 1: Ingestion

The ingestion agent handles multi-source document triage. When a new property enters the pipeline, it automatically pulls listing data, identifies relevant regulatory filings, and finds comparable transactions within configurable parameters.
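The comparable search is essentially a filtered query over past transactions. A toy version, with hypothetical parameters (radius and recency) and a deliberately crude distance metric, looks like:

```python
from datetime import date

def find_comparables(target, transactions, today, max_km=2.0, max_age_days=365):
    """Keep transactions close enough in distance and time.

    The degree-to-km conversion is a rough illustration, not a real
    geodesic distance; parameter names and defaults are assumptions.
    """
    out = []
    for t in transactions:
        km = abs(t["lat"] - target["lat"]) * 111  # ~111 km per degree latitude
        age_days = (today - t["sold_on"]).days
        if km <= max_km and age_days <= max_age_days:
            out.append(t)
    return out
```

The point of making the parameters explicit arguments is that "configurable" means the thresholds live in deal-level config, not in the agent's prompt.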

The critical design decision was defining what “good enough” extraction looks like. We built evaluation sets for each document type, which let us test extraction accuracy before deploying changes.
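A minimal version of such an evaluation set is a list of (document, expected fields) pairs scored by field-level accuracy. The extractor and field names below are hypothetical stand-ins for the real document types:

```python
def field_accuracy(extractor, eval_set):
    """Score an extractor against gold-labelled documents, per field."""
    hits, totals = {}, {}
    for doc, expected in eval_set:
        got = extractor(doc)
        for name, gold in expected.items():
            totals[name] = totals.get(name, 0) + 1
            if got.get(name) == gold:
                hits[name] = hits.get(name, 0) + 1
    return {name: hits.get(name, 0) / totals[name] for name in totals}

# Toy extractor and eval set, just to show the shape of the check.
def toy_extractor(doc):
    return {"postcode": doc.split()[-1]}

eval_set = [
    ("Flat 2, 10 High St SW1A1AA", {"postcode": "SW1A1AA"}),
    ("Haus 5, Berlin 10115", {"postcode": "10115"}),
]
scores = field_accuracy(toy_extractor, eval_set)
```

Running this before every deployment turns "is extraction good enough?" from a judgment call into a per-field number you can gate releases on.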

Ingestion v1 — Initial extraction pipeline with basic field mapping
Ingestion v2 — Added confidence scoring and multi-source cross-referencing
Ingestion v3 — Final version with evaluation framework integration

Stage 2: Evaluation

The evaluation agent takes structured data from ingestion and runs it through a multi-dimensional scoring framework. Each property is assessed against location quality, yield potential, regulatory risk, and comparable transaction benchmarks.

We designed this as a separate agent rather than extending ingestion because the judgment criteria change independently — the investment team adjusts scoring weights quarterly, and the evaluation logic needed to be testable in isolation.
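Keeping the weights as data the investment team can adjust, rather than code, might look like the sketch below. The dimension names and values are illustrative, not our actual quarterly weights:

```python
# Quarterly-adjustable weights live in config, not code.
WEIGHTS = {
    "location_quality": 0.3,
    "yield_potential": 0.4,
    "regulatory_risk": 0.2,
    "comparables": 0.1,
}

def composite_score(dimension_scores: dict, weights: dict = WEIGHTS) -> float:
    """Weighted average over the scored dimensions (each 0.0-1.0)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[d] * dimension_scores[d] for d in weights)
```

Because the scoring function is pure, it can be unit-tested in isolation — which is exactly why evaluation lives in its own agent rather than inside ingestion.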

Evaluation scoring interface — placeholder

Stage 3: Decision

The final agent compiles everything into a decision brief. It’s the most constrained of the three — the output format is rigid because it feeds directly into the investment committee’s review process.

What makes this stage interesting is the confidence flagging. Rather than presenting agent output as authoritative, we designed the brief to explicitly surface where the agent is uncertain and where human judgment is most valuable.
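One way to surface that uncertainty is to annotate each section of the brief with whether it clears a review threshold. The threshold value and section names here are assumptions:

```python
REVIEW_THRESHOLD = 0.75  # below this, a section is flagged for human review

def compile_brief(evaluation: dict) -> dict:
    """Turn {section: (summary, confidence)} into a brief with explicit flags."""
    brief = {}
    for section, (summary, confidence) in evaluation.items():
        brief[section] = {
            "summary": summary,
            "confidence": confidence,
            "needs_review": confidence < REVIEW_THRESHOLD,
        }
    return brief
```

The committee then reads the flags first: a brief with no flagged sections is a fast review, and a flagged section points straight at where human judgment is needed.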

The Workflow Diagram

The complete system connects the three stages with handoff logic that determines when to proceed automatically and when to pause for human review.
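The handoff rule itself can be very small: proceed when every confidence clears the bar, otherwise pause and name the weak dimensions. A sketch, with the threshold as an assumption:

```python
def next_action(confidences: dict, threshold: float = 0.75) -> str:
    """Decide whether the pipeline advances or waits for a human."""
    low = sorted(k for k, v in confidences.items() if v < threshold)
    return "proceed" if not low else "pause_for_review:" + ",".join(low)
```

Naming the low-confidence dimensions in the pause signal means the reviewing analyst starts from the specific weak spot, not from the whole deal.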

Full workflow architecture diagram — placeholder

What Changed

The agentic workflow significantly reduced manual processing time across the pipeline. But the more important outcome was consistency — every deal now goes through the same evaluation rigour regardless of which analyst is on rotation or how busy the team is.

The evaluation framework we built for testing agent accuracy became valuable beyond this project. It gave us a repeatable methodology for validating any agent behaviour before deployment, which we’ve since applied to three other workflows.


This project was built at IMMO Capital across 2024–2026. Details have been generalised where necessary.