
How To Use AI Like A Strategist: Prompt. Research. Build.




Last week at Louisville AI Week, one insight stopped the room.

The conversation shifted away from which AI tool to use — and landed somewhere more important:


What problem are you actually trying to solve?


It sounds obvious. It isn't.

And even after you've named the problem, most people still make the same mistake.


They open one tool. Type a question. Accept the output. Move on.

That's not a strategy. That's a shortcut — and the risks it creates are quietly compounding inside organizations right now.



The Three Risks Nobody Is Talking About


Before I share the method I use, it's worth naming what single-tool AI use actually costs you.


1. Monolithic Thinking


When you rely on one AI tool, your thinking takes one shape.

One worldview. One set of training priorities. One governance philosophy. One model's version of what a good answer looks like — applied to every question you ask, every document you produce, every decision you inform.


That's not intelligence. That's a monolith dressed up as productivity.


Here's what most people don't realize: every major AI tool is built on a fundamentally different foundation. ChatGPT is optimized for broad utility and scale. Claude is built around safety-first alignment. Gemini is shaped by Google's search infrastructure. Grok is designed for speed and cultural immediacy. Copilot is embedded in Microsoft's organizational ecosystem.


Each reflects the governance philosophy of the company that built it. Each carries different risk tolerances, different safety layers, different default assumptions about what a good output looks like.


When you use one tool exclusively, you don't just get one set of answers. You get one set of blind spots — inherited invisibly, reproduced consistently, and scaled across everything your organization produces.


The monolith doesn't announce itself. That's what makes it dangerous.


2. AI Slop


You've seen it. Probably produced it without realizing it.

Generic outputs. Bland structure. Prose that sounds polished but says nothing specific. Content that reads like it could have been written by anyone — because in a sense, it was.


AI slop is what happens when the input isn't intentional. When you open a tool, type a rough question, and accept whatever comes back first.


Multiply that across an organization and you get a homogenized, undifferentiated body of work that erodes credibility, trust, and competitive differentiation — fast.

The antidote isn't a better tool. It's a better question. And most people are skipping the step that produces better questions entirely.


3. Embedded Bias


Every major AI model contains bias. Not as a flaw — as a structural reality.

Bias lives in the training data. In the reinforcement tuning. In the safety layers. In the governance philosophy of the organization that built the model.


When you use one tool exclusively, you inherit its bias without knowing it. It shapes what gets emphasized, what gets softened, what gets reframed, and what gets left out entirely. And because the output looks confident and coherent, the bias is invisible.


This is not hypothetical. It is happening inside organizations right now — quietly, at scale, in strategy documents, client deliverables, and executive communications.

These three risks are the reason the method I'm about to share exists.



The Method: Prompt. Research. Build.


I use three categories of AI tools, in a specific sequence, for a specific reason.

Not to go faster.

To go deeper.


Step 1 — Prompt: Design the question before you ask it.


Before I do any research, I open ChatGPT or Claude and share the outcome I am trying to achieve. Not a question — an outcome. What do I need to know? What decision am I making? What does a genuinely useful answer look like?

Then I ask it to write the research prompt for me.


This is the step most people skip entirely. And it is the step that changes everything downstream.


A poorly constructed question going into a research tool produces a poorly constructed answer — regardless of how sophisticated the tool is. You cannot get strategic output from a tactical input.


Designing the prompt first pressure-tests your thinking before it costs you time, money, or credibility.



Step 1 In Action — The Difference Two Minutes Makes


Let's say you are preparing a market entry brief on AI adoption trends in the healthcare sector.


Most people open Perplexity and type something like:

"AI trends in healthcare 2025"

That's a tactical input. What comes back is broad, generic, and requires significant sorting to find anything useful.


Here's what I do instead.

I open Claude or ChatGPT first and type this:

"I am preparing a market entry brief for a workforce solutions company exploring AI adoption in the healthcare sector. My audience is C-suite executives making budget decisions in Q2 2025. I need to understand where AI is being adopted fastest, what the primary barriers are, which roles are most affected, and what the regulatory landscape looks like. Can you write me a precise research prompt I can take into Perplexity to get the most relevant, cited, current information on this?"


Claude/ChatGPT comes back with something like this:

"Search for: AI adoption barriers and workforce impact in US healthcare systems 2024-2025, including regulatory compliance challenges, clinical vs administrative use cases, nurse and physician role disruption, CMS and HIPAA AI governance updates, and enterprise budget allocation trends for health system AI integration."


Now look at the difference.

The first query would return general trend articles. The second returns specific, decision-relevant intelligence — barriers, roles, regulation, budget patterns — exactly what the brief needs.


Same tool. Same platform. Completely different output quality.

Because the question was designed before it was asked.

That is the prompt layer in action.

One input to Claude or ChatGPT. Two minutes of clarity work. Everything downstream is sharper because of it.
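If you'd rather run this step as a script than in a chat window, here's a minimal sketch using the OpenAI Python SDK. Everything specific in it is an assumption for illustration: the model name and environment variable are placeholders, and Claude via Anthropic's SDK works the same way.

```python
# Step 1 (Prompt): ask one model to design the research prompt for you.
# Minimal sketch (pip install openai). Assumes OPENAI_API_KEY is set in
# the environment; the model name is illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()

outcome = (
    "I am preparing a market entry brief for a workforce solutions company "
    "exploring AI adoption in the healthcare sector. My audience is C-suite "
    "executives making budget decisions. I need to understand where AI is "
    "being adopted fastest, the primary barriers, which roles are most "
    "affected, and the regulatory landscape."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{
        "role": "user",
        "content": outcome + "\n\nWrite me a precise research prompt I can "
        "take into a citation-backed research tool to get the most relevant, "
        "cited, current information on this.",
    }],
)

research_prompt = response.choices[0].message.content
print(research_prompt)
```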



Step 2 — Research: Find, verify, source.


I take that designed prompt into Perplexity.


Perplexity is built differently from generative tools. It doesn't create — it finds, synthesizes, and attributes. Every output comes with cited, clickable sources I can verify before I build anything on top of them.


That matters enormously when accuracy is non-negotiable — when the output is going to a client, a board, or a decision that carries real organizational stakes.


Because I started with a well-constructed prompt, the research outputs are sharper, more targeted, and more credible than anything a half-formed question would produce.



Step 2 In Action — From Designed Prompt To Cited Intelligence


You now have your designed prompt. You take it into Perplexity.


You paste in exactly what Claude/ChatGPT wrote:

"AI adoption barriers and workforce impact in US healthcare systems 2024-2025, including regulatory compliance challenges, clinical vs administrative use cases, nurse and physician role disruption, CMS and HIPAA AI governance updates, and enterprise budget allocation trends for health system AI integration."

Perplexity returns a synthesized response — but here's what makes it different from every other tool.


Every claim comes with a source. Clickable. Verifiable. Dated.


You can see that a statistic about AI administrative adoption came from a JAMA study published in October 2024. You can see that the regulatory update came from a CMS policy brief. You can see that the workforce displacement data came from a Deloitte healthcare report.


That matters for three reasons:

First, you can verify before you build. You are not writing a board brief on information a model invented confidently. You are building on sources you can stand behind.


Second, your citations are already done. When the executive asks where the data came from, you have an answer.


Third, you can spot the gaps. If Perplexity returns strong data on clinical AI adoption but thin data on workforce impact specifically — that's a signal. Either the research doesn't exist yet, or you need a more targeted follow-up prompt. Either way, you know before you build — not after.


What you now have going into Step 3:

  • Verified market data on AI adoption rates in healthcare

  • Cited regulatory landscape summary

  • Sourced workforce impact findings by role

  • Budget allocation trends from credible industry sources

  • A clear map of where the evidence is strong and where the gaps are

You didn't generate any of this. You found it, verified it, and now you own it.


That's the research layer.
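The handoff works the same way in code. A minimal sketch, assuming Perplexity's OpenAI-compatible chat completions endpoint at api.perplexity.ai, a PERPLEXITY_API_KEY environment variable, and the citations field its API documents; the model name is illustrative.

```python
# Step 2 (Research): take the designed prompt into Perplexity and keep
# the sources. Assumes Perplexity's OpenAI-compatible API; verify the
# current endpoint and model names against their documentation.
import os
from openai import OpenAI

pplx = OpenAI(
    api_key=os.environ["PERPLEXITY_API_KEY"],
    base_url="https://api.perplexity.ai",
)

designed_prompt = (
    "AI adoption barriers and workforce impact in US healthcare systems "
    "2024-2025, including regulatory compliance challenges, clinical vs "
    "administrative use cases, nurse and physician role disruption, CMS and "
    "HIPAA AI governance updates, and enterprise budget allocation trends "
    "for health system AI integration."
)

response = pplx.chat.completions.create(
    model="sonar-pro",  # illustrative model name
    messages=[{"role": "user", "content": designed_prompt}],
)

print(response.choices[0].message.content)

# The cited source URLs ride along with the answer. Fetch them defensively,
# since the field sits outside the standard OpenAI response schema.
for url in getattr(response, "citations", None) or []:
    print("source:", url)
```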



Step 3 — Build: Create, refine, triangulate.

I take the verified research back into Claude or ChatGPT — and here is where it gets interesting.


I often run both simultaneously with the same input.

Same data. Same brief. Two models. Two outputs.

Then I compare.


Where they agree, I have confidence. Where they diverge, I have a signal. That divergence is valuable — it tells me something needs closer examination, a sharper angle, or a more precise prompt.


This is AI peer review. I am not asking one model to be right. I am asking two models built on different philosophies to stress-test the same material — and watching where the cracks appear.


I toggle between the three tools — Perplexity, Claude, ChatGPT — in a live loop until the output is not just good enough. It's right.



Step 3 In Action — Two Models. Same Data. Watch What Happens.

You now have verified, cited research. You take it back into Claude and ChatGPT — simultaneously.


You open both and paste in the same brief:

"I am building a market entry brief for C-suite healthcare executives on AI adoption trends. Here is the verified research I've gathered. [paste Perplexity output] My objective is a tight executive brief — no more than four pages — that covers the opportunity, the barriers, the workforce implications, and a recommended entry positioning for a workforce solutions company. Please draft the brief structure and opening argument."


Claude comes back with something measured and precise. It builds a careful argument. It acknowledges complexity. It flags where the evidence is strong and where it is speculative. The prose is clean and the framing is nuanced — appropriate for a board-level audience that will scrutinize every claim.


ChatGPT comes back with something more structured and decisive. It produces a tighter framework. Clear headers. Punchy executive summary. Stronger calls to action. It moves faster toward a recommendation.


Now you compare.

On the market opportunity section — they largely agree. High confidence. Use it.

On the regulatory risk section — they diverge. Claude flags a specific HIPAA compliance ambiguity that ChatGPT glosses over. That divergence is a signal. You go back to Perplexity with a targeted follow-up prompt on that specific regulatory question. You get the answer. You return to the build layer with the gap filled.


On the positioning recommendation — they take different angles. Claude recommends leading with risk mitigation as the entry narrative. ChatGPT recommends leading with efficiency gains. Both are defensible. But now you have a genuine strategic choice in front of you — not a default — and you can make it intentionally based on what you know about your specific audience.


That is AI peer review in action.

You are not asking one model to be right. You are asking two models — built on different philosophies, governed by different principles, trained on different priorities — to pressure-test the same material.


Where they agree, you have confidence. Where they diverge, you have intelligence.
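Scripted, the peer-review step is just the same brief sent to two models, with the outputs set side by side for a human read. A minimal sketch using the OpenAI and Anthropic Python SDKs; both model names are illustrative, and the bracketed placeholder is where the verified Perplexity output goes.

```python
# Step 3 (Build): same verified research, two models, human comparison.
# Minimal sketch (pip install openai anthropic). Assumes OPENAI_API_KEY
# and ANTHROPIC_API_KEY are set; model names are illustrative.
from openai import OpenAI
import anthropic

brief = (
    "I am building a market entry brief for C-suite healthcare executives "
    "on AI adoption trends. Here is the verified research I've gathered: "
    "[paste Perplexity output]. My objective is a tight executive brief, "
    "no more than four pages. Draft the brief structure and opening argument."
)

gpt_draft = OpenAI().chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": brief}],
).choices[0].message.content

claude_draft = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model name
    max_tokens=2048,
    messages=[{"role": "user", "content": brief}],
).content[0].text

# The comparison itself stays human: agreement builds confidence,
# divergence flags what needs a targeted follow-up in the research layer.
print("=== ChatGPT draft ===\n" + gpt_draft)
print("=== Claude draft ===\n" + claude_draft)
```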




The Complete Workflow In One View
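For readers who want the loop end to end, here is the whole sequence as one self-contained sketch. Every concrete detail in it is an assumption for illustration: model names, the Perplexity endpoint, and environment variables should be checked against each provider's current documentation.

```python
# Prompt -> Research -> Build, chained in one pass. Illustrative sketch;
# model names and the Perplexity endpoint are assumptions to verify.
import os
from openai import OpenAI
import anthropic

openai_client = OpenAI()
pplx = OpenAI(api_key=os.environ["PERPLEXITY_API_KEY"],
              base_url="https://api.perplexity.ai")
claude_client = anthropic.Anthropic()

def ask_gpt(prompt: str) -> str:
    r = openai_client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}])
    return r.choices[0].message.content

def ask_claude(prompt: str) -> str:
    m = claude_client.messages.create(
        model="claude-3-5-sonnet-latest", max_tokens=2048,
        messages=[{"role": "user", "content": prompt}])
    return m.content[0].text

outcome = "[your outcome statement: audience, decision, what useful looks like]"

# Step 1 (Prompt): design the question before you ask it.
research_prompt = ask_gpt(
    outcome + "\n\nWrite a precise research prompt for a citation-backed "
    "research tool.")

# Step 2 (Research): find, verify, source.
research = pplx.chat.completions.create(
    model="sonar-pro",  # illustrative model name
    messages=[{"role": "user", "content": research_prompt}],
).choices[0].message.content

# Step 3 (Build): two models, same data; the comparison stays human.
build_prompt = "Draft an executive brief from this verified research:\n" + research
print("=== Draft A ===\n" + ask_gpt(build_prompt))
print("=== Draft B ===\n" + ask_claude(build_prompt))
```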



Total time investment in the method itself: 20 to 30 minutes before you write a single word of the final deliverable.


What you avoid: Building a strategic brief on unverified data, through one model's lens, with a question that was never designed in the first place.


What you produce: A deliverable you can defend. In a room full of executives. With sources. Built on intentional thinking, not default behavior.

That is the difference between using AI and using AI like a strategist.



What This Solves — And Why It Matters

Let me connect this back to the three risks directly.


Against monolithic thinking: Running two generative models simultaneously breaks single-model dependency. When Claude and ChatGPT diverge on the same input, you have found a blind spot. That's not a problem — that's intelligence.


Against AI slop: Designing the prompt before you research forces clarity on the outcome first. Intentional inputs produce differentiated outputs. Generic inputs produce generic outputs. The prompt layer is the quality control most workflows skip.


Against embedded bias: Triangulating across three tools — each built differently, governed differently, trained differently — surfaces where bias might be shaping the answer. No single model's assumptions go unchecked.

Three tools. One truth.




The Bigger Picture


At Louisville AI Week, what I heard underneath all the conversation about tools and models was something more fundamental.


AI is maturing. And as it matures, the competitive advantage shifts.

It moves away from access — everyone has access now. The tools are widely available. The subscriptions are affordable. The barrier to entry is effectively zero.

The advantage moves toward how you use it.


The leaders who will win with AI over the next three years are not the ones with the most tools. They are the ones who have built the most intentional workflows — who understand the risks of single-tool dependency, who design their questions before they ask them, and who use AI to go deeper, not just faster.



Where Most Organizations Actually Are


In my work with leaders and organizations through Future Workforce Systems, I see three levels of AI maturity playing out right now:


Level 1 — Experimentation: Individuals using AI occasionally and inconsistently. No shared workflow. No strategic intent. Results are unpredictable.


Level 2 — Adoption: Teams using AI regularly but independently. Still single-tool dependent. Starting to see the slop problem. Beginning to ask better questions about governance and risk.


Level 3 — Operationalization: Organizations deploying AI against specific, defined problems with intentional workflows, clear risk frameworks, and measurable outcomes. AI as strategic infrastructure — not productivity accessory.

Most organizations are somewhere between Level 1 and Level 2.


The gap between Level 2 and Level 3 is not more tools. It is more clarity — about problems, about workflows, about risk, and about what AI can and cannot do.



The Starting Point


Before you can operationalize AI across a team or an organization, you need to know where you actually stand.

Not where you think you stand. Where the evidence says you stand.

That means honestly assessing:

  • Whether your team has a shared understanding of AI tools and their differences

  • Whether your workflows are intentional or accidental

  • Whether your risk profile — regulatory, reputational, operational — has been mapped to your AI use

  • Whether AI is creating genuine strategic leverage or just faster versions of the same outputs



I've built an AI Readiness Quiz specifically for leaders and organizations asking these questions.


It takes less than five minutes. It gives you a clear picture of where you are across five dimensions of AI readiness — and where the highest-leverage opportunities are for your specific context.


[Take the AI Readiness Quiz → futureworkforcesystems.com]

What's Yours?


I opened this article with the question from Louisville AI Week: what problem are you actually trying to solve?


I'd add one more now: what does your AI workflow actually look like — and is it working?


Drop your answer in the comments. I read every one.


The leaders figuring this out are not keeping it to themselves. The best thinking in this space is happening in conversation — not in isolation with a single tool.


Future Workforce Systems partners with leaders and organizations building intelligent, human-centered AI strategies. We work at the intersection of workforce design, AI adoption, and organizational readiness.


Follow for weekly insights on the future of work, AI strategy, and what operationalizing intelligence actually looks like in practice.


Ready to know where your organization stands? [Take the AI Readiness Quiz →]


