Talk of AI is all around us, but in my experience, practical guidance rooted in specific use cases is surprisingly rare. After spending months deep in the weeds of an enormous documentation migration with AI as my assistant, I’ve learned some hard-won lessons that I think others could benefit from.
If you work in content engineering, technical documentation, or are simply curious about how AI holds up in a complex, real-world project, here’s my take on what worked and what didn’t.
Project Context
I’m a DITA Information Architect on the Information Experience team at Splunk. DITA, short for Darwin Information Typing Architecture, is an open, XML-based standard for structuring and managing technical content.
We recently wrapped up the migration of three large documentation sites into a single help portal, powered by a DITA-based component content management system (CCMS). The timeline was tight, and nearly all of the resources were internal. The migrations were complex and critical to the business, requiring careful planning and execution.
I initially planned only to assist with the migration of the smaller, unversioned site. When that went well, I was asked to lead the much larger second migration. (The third site was handled by another team.) Together, these two migrations meant grappling with roughly 30,000 HTML files, two very different site architectures, and the challenge of customizing an existing Python migration script to fit the content at hand, while also putting processes in place for writers to review and clean up their content.
I want to be clear that AI didn’t complete this project for me. It enabled me to work faster and more efficiently, but only because I did the planning, architecting, and troubleshooting. Used effectively, AI became a power tool that dramatically sped up delivery, but it never replaced the need for expertise or oversight.
Throughout this project, I used the then-current GPT-4 models through an internal Cisco chat-based deployment. These days, I work more in editor-based tools such as GitHub Copilot. Still, the lessons I learned should apply to the present (mid-2025) state of the art, with a few caveats that I mention where relevant.
How I used AI effectively
Prompting
One lesson I learned early on was to treat prompts the way I approach technical documentation: clear, consistent, and comprehensive. Before consulting the AI, I’d sketch out what needed to happen, then break it down into granular steps and write a prompt that left as little to the imagination as possible.
If I wasn’t sure about the solution, I’d use the AI as a brainstorming partner first, then follow up with a precise prompt for implementation.
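To make that concrete, here’s an invented example of the kind of granular, implementation-ready prompt I mean (the file name and requirements are hypothetical, not from the actual project):

```
Add a function to migrate_site.py that takes a list of parsed HTML pages
and writes one DITA topic file per page. Requirements:
1. Derive file names from page titles: lowercase, hyphen-separated.
2. Write files under output/topics/, creating the directory if needed.
3. Log each file written, and return a list of pages that failed to
   convert so the caller can report them.
Do not modify any other function.
```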
Iterative development
The migration automation wasn’t a single script but became a series of Python tools that crawl navigation trees, fetch HTML, convert to DITA XML, split topics into smaller units, map content, and handle version diffs. Each script started small, then grew as I layered in features.
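For a sense of what these tools looked like, here’s a heavily simplified sketch of one stage. The function name and the bare-bones HTML-to-DITA mapping are my own illustration, not the actual migration code:

```python
# Illustrative sketch of one pipeline stage: fetch an HTML page and wrap
# its text in a minimal DITA topic. The real scripts did far more
# (preserved markup, handled images, tables, and cross-references).
import requests
from bs4 import BeautifulSoup
from xml.sax.saxutils import escape

def html_to_dita_topic(url: str) -> str:
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    # Pull a title and flatten the body text for this toy example.
    title = soup.title.get_text(strip=True) if soup.title else "Untitled"
    body_text = soup.get_text(" ", strip=True)

    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<topic id="converted-topic">\n'
        f"  <title>{escape(title)}</title>\n"
        f"  <body><p>{escape(body_text)}</p></body>\n"
        "</topic>\n"
    )
```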
I quickly learned that asking AI to rewrite a large script was a recipe for bugs and confusion. Instead, I added functionality in small, well-defined increments. Each feature or fix got its own prompt and its own GitLab commit. This made it easy to roll back when something went sideways and to track exactly what each change accomplished.
Debugging
Even with good prompts, AI-generated code rarely worked perfectly on the first try – especially as the scripts grew in size. My most effective debugging tool was print statements. When the output wasn’t what I expected, I’d sprinkle print statements throughout the logic to trace what was happening. Sometimes I’d ask AI to re-explain the code line by line, which often revealed subtle logical errors or edge cases I hadn’t considered.
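As a concrete (and invented) example of that kind of tracing, here’s what instrumenting a topic-splitting routine might look like; the function and its logic are made up for illustration:

```python
# Invented example of print-statement tracing: log each decision the
# splitting logic makes so an unexpected result can be traced to its source.
def split_topic(sections: list[dict]) -> list[dict]:
    topics = []
    for section in sections:
        heading = section.get("heading", "")
        print(f"[split] section={heading!r} level={section.get('level')}")
        if section.get("level", 0) <= 2:
            print(f"[split]   -> starting new topic for {heading!r}")
            topics.append({"title": heading, "children": []})
        elif topics:
            print(f"[split]   -> nesting under {topics[-1]['title']!r}")
            topics[-1]["children"].append(section)
        else:
            print(f"[split]   !! orphan section {heading!r}, check input order")
    return topics
```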
Importantly, this wasn’t just about fixing bugs, it was also about learning. My Python skills grew immensely through this process, as I forced myself to really understand every line the AI generated. If I didn’t, I’d inevitably pay the price later when a small tweak broke something downstream.
These days, I lean on an AI-powered integrated development environment (IDE) to accelerate debugging. But the principle is unchanged: don’t skip instrumentation and verification. If the AI can’t debug for you, fall back on print statements and your own ability to trace the problem to its source. And always double-check any AI-generated code.
AI as an implementer, not inventor
This project taught me that AI is fantastic at taking a well-defined idea and turning it into working code. But if you ask it to design an architecture or invent a migration strategy from scratch, it will probably let you down. My most effective workflow was to (1) design the approach myself, (2) describe it in detail, (3) let the AI handle the implementation and boilerplate, and (4) review, test, and refine the AI output.
Version control
I can’t stress enough the importance of version control, even for simple scripts. Every time I added a feature or fixed a bug, I made a commit. When a bug appeared days later, I could walk back through my history and pinpoint where things broke. Sure, this is basic software engineering, but when you’re working with AI it’s even more essential: the rate of change increases, and your own memory of each modification is inevitably less complete.
The net effect of these practices was speed without chaos. We delivered far faster than we could have otherwise, and the quality of the output significantly reduced post-migration cleanup.
Where AI fell short
As valuable as AI was, it had real shortcomings. The cracks started to show as the scripts grew in size and complexity:
- Context limits: As scripts got longer, the AI lost track of earlier code sections. It could add new standalone features, but integrating new logic into existing, interdependent code? That often failed unless I spelled out exactly where and how to make changes. I should note that today’s newer models with larger context windows might reduce some of the issues I ran into with the migration scripts. But I believe it’s still important to be as specific as possible about which sections need to be updated and with what logic.
- Failure to find a working implementation: I found that sometimes the AI simply couldn’t solve the problem as outlined in the prompt. If I asked for a change and it failed three or four times, that was usually a signal to step back and try something different – whether that meant prompting for an alternative approach or writing the code myself.
- System understanding: Certain bugs or edge cases required a solid understanding of our systems, like how the CCMS handles ID values, or how competing case sensitivity rules across systems could trip things up. This is a crucial area where AI couldn’t help me.
What I’d do differently next time
Here’s my advice, if I had to do it all over again:
- Plan core libraries and conventions early: Decide on your stack, naming schemes, and file structure at the outset and include them in every prompt. Inconsistencies here led to time wasted refactoring scripts midstream. That said, working in an editor-based tool that’s aware of your entire pipeline will help keep your libraries consistent from the outset.
- Sanitize everything: File names, IDs, casing, and other seemingly minor details can cause major downstream problems. Include this guidance in your prompting boilerplate (see the sketch after this list).
- Account for custom content: Don’t assume all docs follow the same patterns, and definitely don’t assume the AI understands the nuances of your content. Find out early where the outliers are. This upfront work will save you time in the long run.
- Document the complex stuff: For any logic that takes more than a few minutes to understand, write down a thorough explanation you can refer back to later. There were times I had to re-analyze tricky parts of the scripts weeks later, when a detailed note would have put me right back on track.
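On the sanitizing point, here’s an illustrative sketch of the kind of up-front normalization and case-collision check I mean; the helper names are invented, not the production code:

```python
# Illustrative sanitizer: normalize file names and IDs up front so casing
# and special characters can't cause collisions later in the pipeline.
import re

def sanitize_id(raw: str) -> str:
    """Lowercase, replace unsafe characters, and collapse repeats."""
    cleaned = re.sub(r"[^a-z0-9_-]+", "-", raw.strip().lower())
    return re.sub(r"-{2,}", "-", cleaned).strip("-")

def find_case_collisions(names: list[str]) -> dict[str, list[str]]:
    """Flag names that differ only by case -- fine on Linux, fatal elsewhere."""
    seen: dict[str, list[str]] = {}
    for name in names:
        seen.setdefault(name.lower(), []).append(name)
    return {key: group for key, group in seen.items() if len(group) > 1}

print(sanitize_id("Getting Started (v2).html"))  # -> getting-started-v2-html
```

Catching case collisions before import is far cheaper than untangling them in the CCMS afterward.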
One non-AI tip: keep copies of your source and converted markup in a repository even after importing the converted content into your production tooling. I promise you’ll need to refer back to them.
AI as a partner, not a replacement
Reflecting on the project, I can emphatically say that AI didn’t replace my critical thinking. Instead, it amplified my skills, helping me work at a speed and scale that would have been difficult to achieve alone, while streamlining the post-migration cleanup. But anytime I leaned too heavily on AI without careful planning, I wasted time and had to backtrack.
The real value came from pairing my domain knowledge and critical thinking with AI’s ability to iterate quickly and implement. Used thoughtfully, AI helped me deliver a project that became a career milestone.
If you’re facing your own daunting migration, or just want to get more out of AI in your workflow, I hope these lessons save you some pain, and maybe even inspire you to take on a challenge you might have thought was too big to tackle.
Find more stories on our Innovation channel and subscribe here!