This blog was written in collaboration with Fan Bu, Jason Mackay, Borya Sobolev, Dev Khanolkar, Ali Dabir, Puneet Kamal, Li Zhang, and Lei Jin.
“Everything is a file”; some are databases


Introduction
Machine data underpins observability and diagnosis in modern computing systems, including logs, metrics, telemetry traces, configuration snapshots, and API response payloads. In practice, this data is embedded into prompts to form an interleaved composition of natural-language instructions and large machine-generated payloads, typically represented as JSON blobs or Python/AST literals. While large language models excel at reasoning over text and code, they frequently struggle with machine-generated sequences, particularly when these are long, deeply nested, and dominated by repetitive structure.
We repeatedly observe three failure modes:
- Token explosion from verbosity: Nested keys and repeated schema dominate the context window, fragmenting the data.
- Context rot: The model misses the “needle” hidden inside large payloads and drifts from the instruction.
- Weakness in numeric/categorical sequence reasoning: Long sequences obscure patterns such as anomalies, trends, and entity relationships.

The bottleneck isn’t merely the length of the inputs. Machine data instead requires structural transformation and signal enhancement so that the same information is presented in representations aligned with a model’s strengths.
“Everything is a file”; some are databases
Anthropic successfully popularized the notion that “bash is all you need” for agentic workflows, especially for vibe coding, by fully leveraging the file system and composable bash tools. In machine-data-heavy settings of context engineering, we argue that ideas from database management apply: rather than forcing the model to process raw blobs directly, full-fidelity payloads can be stored in a datastore, allowing the agent to query them and generate optimized hybrid data views that align with the LLM’s reasoning strengths using a subset of simple SQL statements.
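As a minimal sketch of this idea (the table name, schema, and payload here are our own illustration, not ACE’s implementation), a payload can be loaded into an in-process SQLite database and queried with plain SQL:

```python
import sqlite3

# Hypothetical payload: interface records returned by an observability API.
payload = [
    {"device": "rtr-1", "interface": "Gi0/0", "status": "up", "errors": 0},
    {"device": "rtr-1", "interface": "Gi0/1", "status": "down", "errors": 42},
    {"device": "rtr-2", "interface": "Gi0/0", "status": "up", "errors": 3},
]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE records (device TEXT, interface TEXT, status TEXT, errors INT)"
)
conn.executemany(
    "INSERT INTO records VALUES (:device, :interface, :status, :errors)", payload
)

# The agent asks a narrow question with simple SQL instead of re-reading the blob.
rows = conn.execute(
    "SELECT device, interface, errors FROM records WHERE status = 'down'"
).fetchall()
print(rows)  # [('rtr-1', 'Gi0/1', 42)]
```

The full-fidelity rows stay out of the prompt; only the small query result needs to enter the context window.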
Hybrid data views for machine data – “simple SQL is all you need”
These hybrid views are inspired by the database concept of hybrid transactional/analytical processing (HTAP), where different data layouts serve different workloads. Similarly, we maintain hybrid representations of the same payload so that different portions of the data can be more effectively understood by the LLM.
To this end, we introduce ACE (Analytics Context Engineering) for machine data, a framework for constructing and managing analytics context for LLMs. ACE combines a virtual file system (mapping observability APIs to files and transparently intercepting Bash tools to avoid unscalable MCP calls) with the simplicity of Bash for intuitive, high-level organization, while incorporating database-style management techniques to enable precise, fine-grained control over low-level data entries.
Deep Network Model – ACE
ACE is used in Cisco AI Canvas runbook reasoning. It converts raw prompts and machine payloads into hybrid views within instruction-preserving contexts that LLMs can reliably consume. ACE was initially designed to enhance the Deep Network Model (DNM), a Cisco purpose-built LLM for networking domains. To support a broader range of LLM models, ACE was subsequently implemented as a standalone service.
At a high level:
- A preprocessor parses the user prompt (comprising natural language and embedded JSON/AST blobs as a single string) and produces hybrid data views along with optional language summaries (e.g., statistics or anomaly traces), all within a specified token budget.
- A datastore retains a full-fidelity copy of the original machine data. This allows the LLM context to remain small while still enabling complete answers.
- A postprocessing loop inspects the LLM output and conditionally queries the datastore to enrich the response, producing a complete, structured final response, as sketched after this list.
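A toy skeleton of this control flow, with every function stubbed (none of these names are ACE’s API; the truncation and string checks stand in for the real view construction and enrichment logic):

```python
def preprocess(prompt: str, budget: int):
    """Stand-in: real ACE builds hybrid views within the token budget
    and loads the full-fidelity payload into a datastore."""
    views = prompt[:budget]               # bounded context handed to the LLM
    datastore = {"full_payload": prompt}  # complete copy kept outside the prompt
    return views, datastore

def call_llm(context: str) -> str:
    return "DRAFT: needs more rows"       # stubbed model response

def postprocess(draft: str, datastore: dict) -> str:
    """Inspect the draft and conditionally enrich it from the datastore."""
    if draft.startswith("DRAFT"):
        enrichment = datastore["full_payload"][:40]  # stand-in for a SQL query
        return draft + " | enriched with: " + enrichment
    return draft

views, store = preprocess("instructions plus large machine payload ...", budget=32)
print(postprocess(call_llm(views), store))
```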
Row-oriented + Columnar views
We generate complementary representations of the same payload:
- Columnar view (field-centric). For analytics tasks (e.g., line/bar charts, trend, pattern, and anomaly detection), we transform nested JSON into flattened dotted paths and per-field sequences. This eliminates repeated prefixes, makes related data contiguous, and eases per-field computation (see the sketch after this list).
- Row-oriented view (entry-centric). To support relationship reasoning, such as has-a and is-a relationships, including entity membership and association mining, we provide a row-oriented representation that preserves record boundaries and local context across fields. Because this view does not impose an inherent ordering across rows, it naturally permits the application of statistical methods to rank entries by relevance. Specifically, we design a modified TF-IDF algorithm, based on query relevance, term popularity, and diversity, to rank rows.
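A minimal sketch of the columnar transformation, under our own simplifying assumptions (scalar leaves only; ACE’s actual algorithm also handles arrays, mixed schemas, and token budgets):

```python
from collections import defaultdict

def flatten(obj, prefix=""):
    """Yield (dotted_path, value) pairs for one nested JSON record."""
    if isinstance(obj, dict):
        for key, value in obj.items():
            yield from flatten(value, f"{prefix}{key}.")
    else:
        yield prefix.rstrip("."), obj

def to_columnar(records):
    """Pivot a list of records into per-field sequences keyed by dotted path."""
    columns = defaultdict(list)
    for record in records:
        for path, value in flatten(record):
            columns[path].append(value)
    return dict(columns)

records = [
    {"intf": {"name": "Gi0/0", "counters": {"in_errors": 0}}},
    {"intf": {"name": "Gi0/1", "counters": {"in_errors": 42}}},
]
print(to_columnar(records))
# {'intf.name': ['Gi0/0', 'Gi0/1'], 'intf.counters.in_errors': [0, 42]}
```

Pivoting this way turns every dotted path into one contiguous sequence that the model (or an analytics operator) can scan in a single pass, with the repeated key prefixes paid for only once.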
Rendering format: We provide multiple formats for rendering content. The default format remains JSON; although it is not always the most token-efficient representation, our experience shows that it tends to work best with most current LLMs. In addition, we offer a customized rendering format inspired by the open-source TOON project and Markdown, with several key differences. Depending on the schema’s nesting structure, data are rendered either as compact flat lists with dotted key paths or using an indented representation. Both approaches help the model infer structural relationships more effectively.
The idea of a hybrid view is well established in database systems, particularly in the distinction between row-oriented and column-oriented storage, where different data layouts are optimized for different workloads. Algorithmically, we construct a parse tree for each JSON/AST literal blob and traverse the tree to selectively transform nodes, using an opinionated algorithm that determines whether each section is better represented in a row-oriented or columnar view while preserving instruction fidelity under strict token constraints.
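The exact decision rules are not spelled out in this post, but as one plausible sketch of such a heuristic (the thresholds and criteria below are our own assumptions), an array node can be routed to the columnar view when its elements share a uniform, numeric-heavy schema, and kept in the row view otherwise:

```python
def choose_view(array):
    """Heuristic sketch: uniform, numeric-heavy arrays pivot well to columns;
    ragged or text-heavy arrays keep their record boundaries (row view)."""
    if not array or not all(isinstance(e, dict) for e in array):
        return "row"
    if len({frozenset(e) for e in array}) > 1:  # ragged schema across elements
        return "row"
    total = sum(len(e) for e in array)
    if total == 0:
        return "row"
    numeric = sum(isinstance(v, (int, float)) for e in array for v in e.values())
    return "columnar" if numeric / total >= 0.5 else "row"

print(choose_view([{"t": 1, "v": 2.5}, {"t": 2, "v": 3.1}]))     # columnar
print(choose_view([{"name": "a"}, {"name": "b", "role": "x"}]))  # row
```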
Design principles
- ACE follows a principle of simplicity, favoring a small set of generic tools. It embeds analytics directly into the LLM’s iterative reasoning-and-execution loop, using a restricted subset of SQL together with Bash tools over a virtual file system as the native mechanisms for data management and analytics.
- ACE prioritizes context-window optimization, maximizing the LLM’s reasoning capacity within bounded prompts while maintaining a complete copy of the data in an external datastore for query-based access. Carefully designed operators are applied to columnar views, while ranking methods are applied to row-oriented views.
In production, this approach drastically reduces prompt size, cost, and inference latency while improving answer quality.
Illustrative examples
We evaluate token usage and answer quality (measured by an LLM-as-a-judge reasoning score) across representative real-world workloads. Each workload comprises independent tasks corresponding to individual steps in a troubleshooting workflow. Because our evaluation focuses on single-step performance, we do not include full agentic diagnosis trajectories with tool calls. Beyond significantly reducing token usage, ACE also achieves higher answer accuracy.
1. Slot filling:
Network runbook prompts combine instructions with JSON-encoded board and chat state, prior variables, tool schemas, and user intent. The task is to ground a handful of fields buried in dense, repetitive machine payloads.
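To illustrate how row ranking helps here (a much-simplified stand-in for the modified TF-IDF described earlier; the term-popularity and diversity components are omitted), rows can be scored against the query so that only the most relevant entries are surfaced:

```python
import math
import re
from collections import Counter

def tokenize(row):
    return re.findall(r"[a-z0-9]+", str(row).lower())

def rank_rows(rows, query, top_k=3):
    """Rank row-view entries by TF-IDF-style relevance to the query."""
    docs = [tokenize(row) for row in rows]
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))

    def score(doc):
        tf = Counter(doc)
        return sum(
            (tf[t] / len(doc)) * math.log(n / df[t])
            for t in tokenize(query) if t in tf
        )

    ranked = sorted(range(n), key=lambda i: score(docs[i]), reverse=True)
    return [rows[i] for i in ranked[:top_k]]

rows = [
    {"device": "rtr-1", "status": "up"},
    {"device": "rtr-2", "status": "down", "reason": "fiber cut"},
    {"device": "rtr-3", "status": "up"},
]
print(rank_rows(rows, "down fiber", top_k=1))
# [{'device': 'rtr-2', 'status': 'down', 'reason': 'fiber cut'}]
```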


Our approach reduces the average token count from 5,025 to 2,350 and corrects 42 errors (out of 500 tests) compared to directly calling GPT-4.1.
2. Anomalous behaviors:
The task is to address a broad spectrum of machine data analysis tasks in observability workflows.


By applying anomaly detection operators to columnar views to provide additional contextual information, our approach increases the average answer quality score from 3.22 to 4.03 (out of 5.00), a 25% increase in accuracy, while achieving a 44% reduction in token usage across 797 samples.
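As a simple illustration of such an operator (ACE’s actual detectors are not described here; the z-score rule below is our own stand-in), an anomaly annotation can be computed over a per-field sequence taken from the columnar view:

```python
import statistics

def zscore_anomalies(series, threshold=2.0):
    """Flag points whose z-score exceeds the threshold; a stand-in for
    the anomaly detection operators applied to columnar views."""
    mean = statistics.fmean(series)
    stdev = statistics.pstdev(series)
    if stdev == 0:
        return []
    return [(i, x) for i, x in enumerate(series) if abs(x - mean) / stdev > threshold]

latency_ms = [12, 11, 13, 12, 14, 11, 240, 12, 13]  # hypothetical metric column
print(zscore_anomalies(latency_ms))  # [(6, 240)]
```

The resulting annotations (e.g., “index 6 is a 240 ms spike”) are the kind of additional contextual signal that can accompany the view instead of asking the model to spot the outlier in raw JSON.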
3. Line chart:
The input typically consists of time-series metrics data: arrays of measurement records collected at regular intervals. The task is to render this data using frontend charting libraries.
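One way to sidestep the failure described below is to have the model emit only a compact chart specification while the full series is attached programmatically from the datastore; a minimal sketch with illustrative names (this is our own reading of the approach, not ACE’s published interface):

```python
import json

def build_chart_payload(spec: dict, datastore: dict) -> str:
    """Attach full-fidelity series from the datastore to a compact chart
    spec emitted by the LLM, so the model never re-emits raw points."""
    payload = {
        "type": spec["type"],
        "title": spec["title"],
        "series": [
            {"name": path, "data": datastore[path]} for path in spec["series_paths"]
        ],
    }
    return json.dumps(payload)

datastore = {"intf.counters.in_errors": [0, 1, 0, 42, 3]}  # columnar view
spec = {"type": "line", "title": "Interface errors",
        "series_paths": ["intf.counters.in_errors"]}       # what the LLM emits
print(build_chart_payload(spec, datastore))
```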


Directly calling the LLM often results in incomplete data rendering due to long output sequences, even when the input fits within the context window. In the figure above, the LLM produces a line chart with only 40–120 points per series instead of the expected 778, leading to missing data points. Across 100 test samples, as shown in the following two figures, our approach achieves roughly 87% token savings, reduces average end-to-end latency from 47.8 s to 8.9 s, and improves the answer quality score (similarity_overall) from 0.410 to 0.786 (out of 1.00).


4. Benchmark summary:
In addition to the three examples discussed above, we compare key performance metrics across a range of networking-related tasks in the following table.


Observations: Extensive testing across a range of benchmarks demonstrates that ACE reduces token usage by 20–90% depending on the task, while maintaining and in many cases improving answer accuracy. In practice, this effectively delivers an “unlimited” context window for prompts involving machine data.
The above evaluation covers only individual steps within an agentic workflow. Design principles grounded in a virtual file system and database management enable ACE to interact with the LLM’s reasoning process by extracting salient signals from the vast volume of observability data through multi-turn interactions.

