When a finance team reviews an annual report, the insight they need is not in the opening letter from the CEO. It is in the second table on page 147 — a specific row, a specific column, a number that determines whether a budget gets approved or a bid gets submitted.
This is true across most document-heavy work. The critical fact is almost never in the prose. It is in the structure: the table, the chart, the comparison matrix, the specification sheet with tolerances in a nested header column.
And yet most AI systems are built around text.
The gap that prose-first AI leaves open
Large language models are trained predominantly on prose. They are good at reading paragraphs, summarising narratives, and answering questions phrased as sentences. That capability is genuinely useful — but it covers only part of what enterprise documents actually contain.
A number without its row and column context is meaningless. A value extracted from a table without knowing which header governs it can be actively misleading. When an AI system ingests a document but flattens its structure in the process, it loses exactly the information that makes the document valuable.
For organisations whose work depends on technical specifications, financial statements, regulatory comparisons, or supplier data sheets, this is not a minor limitation. It is a fundamental gap.
The example below illustrates this directly. The screenshot shows Table 3.6 from the ECB Economic Bulletin — a dense statistical table covering unit labour costs, compensation, and productivity across 12 economic sectors and multiple time periods — as it appears in NotebookLM. This is not a criticism of NotebookLM specifically; most general-purpose AI tools handle structured documents the same way. The limitation is in how documents are parsed before the model ever sees them, not in the model itself:
ECB Table 3.6 as parsed by a typical general-purpose AI tool. The four row-section labels are highlighted in green as reference points; you will see the same labels highlighted in the /Cortex screenshot below. Flattened like this, the table leaves the model little structure to reason over.
The row labels are readable. But without the column headers, every number is ambiguous. A figure of 4.5 under "Compensation per employee" could belong to any of the 12 sectors. Once the column structure is lost at the parsing stage, the model cannot reliably recover it, and its answers inherit that ambiguity.
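A toy sketch makes the loss concrete. This is not any tool's actual parser, and the table below is an invented subset of the ECB data; it only shows how a naive flattening step drops the header row while a structured representation keeps every value tied to its sector:

```python
# Toy illustration of table flattening (invented data, not a real parser).

table = {
    "columns": ["Manufacturing", "Construction", "Services"],
    "rows": {
        "Compensation per employee": [4.5, 3.1, 5.2],
        "Labour productivity": [1.2, 0.4, 0.9],
    },
}

def flatten(t):
    # A naive parser emits row labels and values but drops the column headers.
    lines = []
    for label, values in t["rows"].items():
        lines.append(label + " " + " ".join(str(v) for v in values))
    return "\n".join(lines)

def lookup(t, row, col):
    # With structure retained, every value keeps its governing headers.
    return t["rows"][row][t["columns"].index(col)]

print(flatten(table))
# The flattened text contains 4.5 but no longer says which sector it belongs to.
print(lookup(table, "Compensation per employee", "Construction"))  # 3.1
```

In the flattened output, "Manufacturing" never appears, so no amount of downstream reasoning can reattach the number to its column.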
What Cube5 /Cortex does differently with structured content
/Cortex takes a different approach: it retains the structure of the source document and renders tables as they actually exist, with column headers, row labels, nested headings, and layout intact.
When a table is processed, the relationship between each value and its context is preserved: what the row represents, what the column means, what unit applies, what condition governs the entry. That structure travels with the data through search, retrieval, and AI reasoning.
Here is the same ECB table in /Cortex — same document, same row-section labels highlighted in green:
The same ECB Table 3.6 in /Cortex. The structure of the source document is retained and rendered as-is: the table title and subtitle are intact, and the four row-section labels (highlighted in green) are anchored in their correct positions. Every number can be read in context — which sector, which metric, which period.
This enables three things that prose-first AI cannot reliably do:
Precise extraction with citation. Every value /Cortex surfaces is traceable — not just to the document, but to the specific table, row, and cell it came from. Teams can validate the output without hunting through the source.
Cross-document comparison. /Cortex can map a figure from one document against the corresponding entry in another: a line in a financial statement matched to a budget row, a supplier specification checked against an internal threshold, an RFP requirement mapped to a product capability in a data sheet. The comparison is structured, not approximate.
Consistent results across users. When a workflow depends on extracting specific values from documents, the result should not vary based on how a question is phrased or who asks it. /Cortex delivers repeatable, governed extraction — not a probabilistic summary.
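One way to picture the first capability is a value that carries its own provenance. The record below is a hypothetical data model (the field names are illustrative, not /Cortex's API); the point is that a cited answer needs the cell's full context to travel with it:

```python
from dataclasses import dataclass

# Hypothetical cell record (illustrative names, not a real API):
# an extracted value keeps its table, row, column, and unit,
# so the output can be traced back to a specific cell.

@dataclass(frozen=True)
class Cell:
    value: float
    unit: str
    row: str
    column: str
    table: str
    document: str

    def citation(self) -> str:
        return (f"{self.document} / {self.table} / "
                f"row '{self.row}' / column '{self.column}'")

cell = Cell(
    value=4.5, unit="% y/y",
    row="Compensation per employee", column="Construction",
    table="Table 3.6", document="ECB Economic Bulletin",
)
print(cell.value, cell.unit, "->", cell.citation())
```

A reviewer can follow that citation straight to the source cell instead of searching the document for a bare number.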
What this looks like in practice
A finance team reviewing multiple annual reports needs to compare revenue growth, margin trends, and segment performance across companies and periods. Rather than reading each document sequentially and building a manual comparison, /Cortex extracts the relevant rows from each statement, maps them to a consistent structure, and links every figure to its source cell.
An engineering or procurement team validating a new supplier sends a set of technical data sheets through /Cortex. It compares spec values against internal requirements, flags values outside tolerance, and identifies any fields where the supplier documentation is incomplete — each finding linked to the exact entry in the original document.
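The supplier-validation step can be sketched in a few lines. This assumes the spec values and internal thresholds have already been extracted into simple mappings; the field names and limits are invented for illustration:

```python
# Sketch of spec-vs-requirement checking (invented fields and limits).
# Each requirement is a (kind, limit) pair: "max" means the spec value
# must not exceed the limit, "min" means it must not fall below it.

requirements = {
    "tolerance_mm": ("max", 0.05),
    "mass_g": ("max", 120.0),
    "supply_voltage_v": ("min", 12.0),
}

supplier_spec = {
    "tolerance_mm": 0.08,      # exceeds the allowed maximum -> flag
    "supply_voltage_v": 24.0,  # within range
    # "mass_g" absent -> incomplete documentation
}

def validate(spec, reqs):
    findings = []
    for field, (kind, limit) in reqs.items():
        if field not in spec:
            findings.append((field, "missing"))
        elif kind == "max" and spec[field] > limit:
            findings.append((field, f"{spec[field]} exceeds max {limit}"))
        elif kind == "min" and spec[field] < limit:
            findings.append((field, f"{spec[field]} below min {limit}"))
    return findings

for field, issue in validate(supplier_spec, requirements):
    print(field, "->", issue)
```

In a real workflow each finding would also carry the source-cell citation described above, so a flagged value links back to the exact entry in the supplier's data sheet.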
A pre-sales or bid team responding to an RFP uses /Cortex to map line-item requirements to product capability data from their own specification library. Structured question, structured answer, source cited.
In each case, the document is not a thing to be read — it is a surface to be worked with. The AI operates on the structure, not around it.
The bigger point
There is a version of enterprise AI that handles prose well and treats tables as an afterthought. That version solves the easy part of the problem.
In most of the documents that govern real business decisions — contracts, specifications, financial statements, compliance reports, supplier comparisons — the valuable facts are in the structured content. Getting AI to production in these environments means being able to work with all of it.
/Cortex is designed for that reality.
If your documents are dense with tables and structured data, we are happy to show you how /Cortex handles them.