Every engineer has been there. You paste a vague question into ChatGPT, Claude, or Copilot and get a vague answer back. The problem is not the model. The problem is how you described the issue.
Dr. Adam Rodman from Harvard Medical School developed a framework called TLICC to help patients describe symptoms to doctors more effectively. The insight is simple: structured descriptions lead to better diagnoses. And this principle transfers directly to how we interact with AI systems.
What Is TLICC
TLICC is an acronym for five dimensions that together characterize a problem:
- T — Time: When did it start? How long has it been happening?
- L — Location: Where exactly is the problem? Which system, service, or component?
- I — Intensity: How severe is it? Is it a minor annoyance or a critical blocker?
- C — Context: What was happening when it began? What changed recently?
- C — Change: Has the situation gotten better or worse over time?
In medicine, these five questions give a doctor most of what they need to form a differential diagnosis. In engineering and AI interactions, they give the model the structured context it needs to generate genuinely useful responses.
Why TLICC Works for AI Prompts
Large language models are pattern-matching engines. The more structured and specific your input, the more targeted the output. TLICC forces you to move from “it’s broken” to a precise, multi-dimensional description of the problem.
Consider the difference:
Without TLICC:
“My Kubernetes pods keep crashing.”
With TLICC:
“Since yesterday morning (Time), our payment-service pods in the production cluster on GKE (Location) are crash-looping every 3-5 minutes with OOMKilled status (Intensity). We deployed a new container image with updated dependencies on Monday (Context), and the frequency has increased from hourly to every few minutes over the past 24 hours (Change).”
The second prompt gives any AI model — or any human colleague — enough information to immediately suggest checking memory limits, reviewing the dependency changes, and comparing resource usage before and after the deployment.
TLICC Applied to Common Engineering Scenarios
Infrastructure Troubleshooting
- Time: “Started at 14:32 UTC, right after the terraform apply completed”
- Location: “Affects the eu-west-1 region only, specifically the API gateway”
- Intensity: “50% of requests are returning 502 errors”
- Context: “We added a new WAF rule and increased the instance count from 3 to 5”
- Change: “Error rate climbed from 10% to 50% over the past hour”
Performance Optimization
- Time: “Build times increased two sprints ago”
- Location: “CI/CD pipeline, specifically the integration test stage”
- Intensity: “Pipeline went from 8 minutes to 35 minutes”
- Context: “We added end-to-end browser tests and migrated from Jenkins to GitHub Actions”
- Change: “Steadily getting worse as we add more test cases, roughly 2 minutes per week”
Security Incident Response
- Time: “Alert triggered at 03:15 UTC this morning”
- Location: “Auth service logs show requests from IPs in ranges we do not recognize”
- Intensity: “200+ failed login attempts per minute against the admin API”
- Context: “We published a blog post yesterday that mentioned our API endpoint structure”
- Change: “Attack volume doubled after our post was shared on Hacker News”
TLICC as a Prompt Engineering Pattern
You can formalize TLICC into a reusable prompt template:
I need help with [brief description].
Time: [when did this start / how long has it been happening]
Location: [which system, service, environment, or component]
Intensity: [severity, impact, frequency]
Context: [what was happening when it began, recent changes]
Change: [is it getting better, worse, or staying the same]

This works with any AI model: ChatGPT, Claude, Copilot, Gemini, or open-source models running on RHEL AI. The framework is model-agnostic because it structures the input, not the model.
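As a sketch, the template can also live in code, so every prompt your tooling sends is guaranteed to carry all five dimensions. Everything below (function name, keyword names) is illustrative, not part of any library:

```python
# Minimal sketch: assemble a TLICC-structured prompt from its five dimensions.
# All names here are illustrative; adapt them to your own tooling.

DIMENSIONS = ("Time", "Location", "Intensity", "Context", "Change")

def build_tlicc_prompt(description: str, **dims: str) -> str:
    """Build a structured prompt; raise if any TLICC dimension is missing."""
    provided = {k.lower(): v for k, v in dims.items()}
    missing = [d for d in DIMENSIONS if d.lower() not in provided]
    if missing:
        raise ValueError(f"Missing TLICC dimensions: {', '.join(missing)}")
    lines = [f"I need help with {description}."]
    for d in DIMENSIONS:
        lines.append(f"{d}: {provided[d.lower()]}")
    return "\n".join(lines)

prompt = build_tlicc_prompt(
    "crash-looping payment-service pods",
    time="since yesterday morning",
    location="production GKE cluster, payment-service",
    intensity="OOMKilled every 3-5 minutes",
    context="new container image with updated dependencies deployed Monday",
    change="frequency increased from hourly to every few minutes",
)
print(prompt)
```

The resulting string can be pasted into any chat interface or passed to whatever model API you use; the value is that a forgotten dimension fails loudly instead of silently producing a vague prompt.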
Beyond Troubleshooting
TLICC is not limited to debugging. You can apply it to:
- Architecture decisions: When did the need arise? Which services are affected? How urgent is it? What constraints exist? How have requirements evolved?
- Code reviews: When was this code written? Which module? How complex is the change? What feature triggered it? How has the approach changed during development?
- Vendor evaluations: When do we need a solution? Which team or workload? How critical is it? What are we using today? How have our requirements shifted?
The pattern works because it forces completeness. Most disappointing AI interactions trace back to the user omitting one or more of these dimensions.
The Medical Parallel
Dr. Rodman’s original insight is worth understanding. In clinical medicine, patients often describe symptoms in unstructured, emotional terms. Doctors are trained to extract the TLICC dimensions through questioning. The framework makes the patient a better collaborator in their own diagnosis.
The same dynamic exists in AI interactions. You are the patient describing symptoms to the model. The more structured your description, the less the model has to guess — and the fewer hallucinations or generic responses you get back.
This is particularly relevant as AI moves into healthcare applications. Teams building clinical AI systems on platforms like Kubernetes need to understand how structured input dramatically improves output quality. The same principle applies whether you are building an Ansible automation pipeline or a diagnostic assistant.
Practical Tips
Write TLICC before you prompt. Take 30 seconds to fill in all five dimensions. You will often realize you are missing critical context.
Use TLICC for team communication too. Slack messages, incident reports, and support tickets all benefit from the same structure.
Teach TLICC to your team. When everyone describes problems the same way, both AI tools and human colleagues become more effective.
Iterate with TLICC. If the AI response is not helpful, check which TLICC dimension you under-specified and add more detail.
Combine with role prompting. Start with “You are a senior SRE” and then provide TLICC-structured context for the best results.
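The "iterate" tip can even be partially automated: before re-prompting, run a quick check for which dimensions your draft never mentions. The keyword lists below are rough illustrative guesses, not an established standard:

```python
# Rough heuristic: flag TLICC dimensions a draft prompt never mentions.
# The cue lists are illustrative assumptions, not a standard vocabulary.

CUES = {
    "Time": ("since", "started", "ago", "yesterday", "utc", "minutes", "hours"),
    "Location": ("service", "cluster", "region", "pipeline", "module", "environment"),
    "Intensity": ("%", "error", "severity", "blocker", "failed", "per minute"),
    "Context": ("deployed", "changed", "added", "migrated", "after we"),
    "Change": ("worse", "better", "increased", "decreased", "doubled", "climbed"),
}

def underspecified(prompt: str) -> list[str]:
    """Return the TLICC dimensions with no matching cue in the prompt text."""
    text = prompt.lower()
    return [dim for dim, cues in CUES.items()
            if not any(cue in text for cue in cues)]

vague = "My Kubernetes pods keep crashing."
print(underspecified(vague))  # every dimension gets flagged for this one-liner
```

A heuristic like this will misfire on unusual wording, but as a pre-flight check it catches the most common failure mode: a prompt that looks complete to you and is missing half its context.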
Final Thoughts
The best prompt engineering techniques often come from outside tech. TLICC is a medical diagnostic framework that happens to be one of the most effective ways to structure AI interactions. It is simple, memorable, and universally applicable.
Next time you are about to paste a one-liner into your AI assistant, pause and ask yourself: did I cover Time, Location, Intensity, Context, and Change? Those 30 extra seconds of structured thinking will save you minutes of back-and-forth.
The framework reminds us that AI is not magic. It is a collaborator that performs best when given structured, complete information — exactly like a good doctor needs from a patient.