How I used context engineering to build a series of custom GPTs for a health tech brand
- Mike Simone
- Jul 2

In early 2025, I was tasked with solving a challenge: Can we use AI to scale our content workflows without losing our voice, accuracy, or compliance?
The company was growing fast, and content demands were multiplying. And like most teams exploring AI, we were stuck between two extremes: tools that were too generic to be useful, and ones that were too risky to trust.
What we needed wasn’t just a smarter chatbot — we needed a system. And that’s where context engineering came in.
Over the next few months, I built a series of custom GPTs from the ground up, designed specifically for a health tech company operating in a tightly regulated space. It wasn’t just prompt tuning. It was architecture:
A 50+ page brand writing system
A curated internal content library
Role-specific GPTs, each configured to speak in the company’s voice
Guardrails for compliance, clarity, and creativity
The result was a private, in-house set of AI tools that’s now being piloted by leadership to support both the marketing and medical writing teams, without compromising on quality or brand integrity.
Here’s how I did it — and what you can take away if you're trying to build something similar.
What is context engineering?
Most people think the magic of AI comes from the prompt, but the real magic happens before the prompt is even written.
Context engineering is the practice of structuring the information, parameters, and environment that an AI model draws from — so it can consistently generate output that’s useful, trustworthy, and aligned with your goals.
It’s not just about what you say to the model. It’s about what the model already knows, and how it knows it.
In this project, context engineering meant:
Designing a reference system: voice, tone, brand principles, writing examples, and common phrases
Curating an internal content library the model could draw from without hallucinating
Creating role-specific guidance for how each GPT should respond, whether the user was a marketer or a medical writer
Building guardrails around what the model shouldn’t say, to preserve compliance and reduce risk
If prompt engineering is what you say to the tool, context engineering is everything that surrounds it — the map, the rules, and the compass.
And when it’s done right, the output doesn’t just sound good. It sounds like you.
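To make that concrete, here’s a rough sketch of the pattern in code. Custom GPTs are actually configured through ChatGPT’s builder rather than the API, and the file names, rule text, and model below are all invented for illustration. But the idea is the same: most of what the model sees is assembled before the user’s prompt ever arrives.

```python
from openai import OpenAI

client = OpenAI()

# The context is built *before* any prompt is written: voice rules,
# reference examples, and guardrails. All file names and rule text
# here are hypothetical stand-ins for the real brand system.
VOICE_RULES = open("brand_voice_guide.md").read()
REFERENCE_EXAMPLES = open("approved_examples.md").read()
GUARDRAILS = (
    "Never make medical claims beyond approved product language. "
    "When unsure, say so and flag the draft for clinical review."
)

SYSTEM_CONTEXT = f"{VOICE_RULES}\n\n{REFERENCE_EXAMPLES}\n\n{GUARDRAILS}"

def branded_draft(user_prompt: str) -> str:
    """The user's prompt is the smallest part of what the model sees."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model, for illustration only
        messages=[
            {"role": "system", "content": SYSTEM_CONTEXT},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content
```

The prompt is one line. Everything else is context.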
The challenge
The company I built this for sits at the intersection of healthcare, technology, and consumer education. That means every piece of content, from an Instagram caption to a white paper, needed to strike a delicate balance:
Clear but not simplistic
Confident but not overpromising
Approachable but still medically credible
And with multiple teams producing content (marketing, medical, partnerships, leadership), the risk of voice drift was real. Everyone had a different interpretation of what “on-brand” meant. Edits were subjective. Bottlenecks were constant. And quality control depended entirely on a small group of expert reviewers who were headed straight for burnout.
At the same time, like every forward-looking company, leadership wanted to explore AI. But most of the tools on the market:
Spit out generic copy that didn’t sound like us
Required extensive editing to be usable
Raised red flags around brand risk, hallucination, or compliance
The question wasn’t “should we use AI?” It was: how do we use AI in a way that actually makes things better, not just faster or cheaper?
We didn’t need automation for automation’s sake. We needed a system that could scale good judgment.
The solution
To solve for scale without sacrificing nuance, I built a custom GPT ecosystem anchored in a brand-specific writing system.
The solution included four core components:
1. A 50+ page brand writing guide
This wasn’t just a style guide. It was a comprehensive playbook that defined our voice across different tones, channels, and scenarios, with real examples, not just principles. It included:
Key product information
Dozens of “approved” phrases
Side-by-side comparisons of right vs. wrong tone
Channel-specific adaptations (e.g., how the same concept sounds on social vs. email vs. blog)
Guidance on writing about medical content for non-medical readers
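A 50-page guide is written for humans, but a model can only use what makes it into context. One way to bridge that gap is to distill the guide’s examples into structured data that can be dropped into a GPT’s instructions. Everything below is invented for illustration; the real guide’s entries are confidential.

```python
# Hypothetical distillation of a few style-guide entries into data a
# GPT's instructions can carry. None of these are real brand entries.
STYLE_GUIDE = {
    "approved_phrases": [
        "clinically informed",
        "designed with your care team in mind",
    ],
    "tone_pairs": [  # side-by-side right vs. wrong examples
        {
            "wrong": "Our product cures sleep problems.",
            "right": "Our product is designed to support healthier sleep habits.",
            "why": "No cure claims; stay inside approved product language.",
        },
    ],
    "channel_adaptations": {  # the same concept, tuned per channel
        "social": "Short and warm; one idea per post.",
        "email": "Conversational but structured; one clear call to action.",
        "blog": "Room to explain the science in plain language.",
    },
}
```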
2. A curated internal library
I indexed some of the company’s strongest existing content (marketing campaigns, thought leadership pieces, FAQs) and turned it into structured reference material. This gave the GPTs real-world reference points to emulate.
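Custom GPTs handle this through uploaded knowledge files, but the underlying idea can be sketched with a simple embedding index: chunk your best existing content, then retrieve the closest matches as reference points at draft time. A minimal sketch, assuming OpenAI’s embeddings API, with placeholder content:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

# Placeholder chunks standing in for curated campaigns, thought
# leadership pieces, and FAQs.
library = [
    "FAQ: How our sensor measures sleep stages...",
    "Campaign copy: Why recovery matters as much as training...",
]

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of texts with an assumed OpenAI embedding model."""
    resp = client.embeddings.create(
        model="text-embedding-3-small",
        input=texts,
    )
    return np.array([d.embedding for d in resp.data])

index = embed(library)

def closest_references(query: str, k: int = 2) -> list[str]:
    """Return the k library chunks most similar to the query."""
    q = embed([query])[0]
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    return [library[i] for i in np.argsort(sims)[::-1][:k]]
```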
3. Role-specific custom GPTs
Instead of one tool, I created multiple GPTs, each scoped to a particular role or use case:
One for general brand and marketing copy
One for clinical/scientific writing
One for social media and email writing
Each drew on a different slice of the system, with tailored instructions and reference materials.
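In practice, the scoping can be as simple as a mapping from role to its slice of the system: which instructions and which reference materials each GPT gets. A hypothetical sketch (the file names and instruction text are made up):

```python
# Hypothetical role-to-configuration mapping. Each GPT sees only the
# slice of the brand system relevant to its job; file names are invented.
GPT_CONFIGS = {
    "brand_marketing": {
        "instructions": "Write in the core brand voice for general marketing.",
        "references": ["brand_voice_guide.md", "approved_examples.md"],
    },
    "clinical_writing": {
        "instructions": "Write for clinical accuracy; cite sources; defer unverified claims to review.",
        "references": ["clinical_style_notes.md", "citation_rules.md"],
    },
    "social_email": {
        "instructions": "Write short, channel-native copy; one idea per message.",
        "references": ["channel_adaptations.md"],
    },
}
```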
4. Built-in guardrails
I embedded constraints directly into the GPTs: what not to say, how to cite or attribute sources, and when to prompt for human review. The tools were designed as co-pilots, not autonomous writers.
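In a custom GPT, those guardrails live in the instructions themselves. But the co-pilot pattern (draft anything, publish nothing until a human clears flagged language) can also be sketched as a simple post-check. The trigger terms here are invented; a real list would come from the compliance team.

```python
# Hypothetical post-generation check: the GPT drafts, a human clears.
# The trigger terms are invented; a real list comes from compliance.
RISKY_TERMS = ["cure", "guaranteed", "clinically proven"]

def needs_human_review(draft: str) -> bool:
    """Flag drafts containing language only a reviewer can approve."""
    lowered = draft.lower()
    return any(term in lowered for term in RISKY_TERMS)

draft = "Our tracker is clinically proven to fix your sleep."
if needs_human_review(draft):
    print("Route to medical/legal review before publishing.")
```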
The result wasn’t just a working AI tool. It was an operational system that supported consistency, creativity, and compliance all at once.
Where things are headed…
There’s a lot of hype around AI. And a lot of fear, too. But most of the conversation misses the real point:
AI is only as useful as the system behind it.
If you feed it basic stuff, it gives you basic stuff back. If you feed it context, it gives you leverage.
Context engineering isn’t a buzzword. It’s the bridge between generic tools and purpose-built intelligence.
Need content strategy help? Book a session with me.