HOW TO USE THIS GUIDE
Who this is for: Business owners and senior leaders at small and medium businesses — the kind running 5 to 50 people, growing past their current operational ceiling, and trying to figure out how AI actually fits into how they run.
What you'll walk away with: A scored diagnosis of your five highest-friction points, mapped to a clear deployment sequence you can act on immediately.
What this is not: Theory. Hype. A vendor pitch disguised as research.
The data in this guide comes from McKinsey's largest global studies into AI adoption, conducted in 2025 and 2026. The framework, the diagnostic, and the deployment blueprint are ours — built from the experience of designing and deploying AI systems ourselves, including the agency you're reading this from.
Work through it honestly. The score at the end is only useful if it's accurate.
SECTION 01 — THE STATE OF PLAY
The Paradox
Every business owner we speak to has the same conversation.
They're using AI. Their team is using AI. They've watched the demos, read the headlines, maybe paid for three or four subscriptions. They believe — genuinely believe — that this technology is going to change how their business operates.
And yet nothing has fundamentally changed.
Revenue is the same. Operational drag is the same. The same bottlenecks, the same manual processes, the same team working just as hard as they were eighteen months ago.
This is not a small business problem. According to McKinsey's 2025 global research into AI adoption — the largest study of its kind, spanning thousands of executives and employees — the gap between AI investment and AI outcomes is the defining business challenge of this moment.
The technology is not the issue. Your belief in it is probably justified. The gap is in the infrastructure between the two.
Most businesses are sitting in the 99%: the group for whom AI hasn't yet moved a single meaningful number. Not because they lack ambition. Because ambition and architecture are two different things.
A pilot that never scales isn't a strategy. It's expensive procrastination.
The businesses that break out of this pattern are not necessarily the ones with the biggest budgets or the most technical teams. They're the ones that stopped adding AI tools and started building AI infrastructure. There's a difference — and it's the difference that determines whether AI shows up on your bottom line or just your subscription bill.
Your people aren't waiting for permission. They're already using the Giants — ChatGPT open in one tab, your CRM in another, copy-pasting between the two and calling it a workflow. They want this to work. What they're missing is the infrastructure to make it count.
The Reframe
Here is the honest diagnosis, drawn from both the global research and from building AI systems ourselves:
This is not a technology problem. The models are extraordinary. Your team is ready. What's missing is the operational layer that connects AI to how your business actually runs.
For large enterprises, this is a transformation programme. For a small or medium business, it's a more targeted, more achievable, more immediate opportunity — and one that most of your competitors haven't moved on yet.
The rest of this guide is a diagnostic. It will tell you exactly where your friction lives, score your current readiness, and show you the five moves that close the gap — in sequence, in plain language, without a six-figure consulting retainer attached.
The window is real. The urgency is real. Let's find out where you stand.
SECTION 02 — WHY MOST AI DEPLOYMENTS STALL
The Five Drags
There is a reason two-thirds of businesses are still running pilots two years in. It is not bad luck and it is not lack of effort. It is five specific, identifiable, fixable problems — and they tend to appear in combination.
We call them The Five Drags.
Each one is an operational friction point. None of them are technical failures. They are infrastructure failures — gaps between where AI lives and where work actually happens. Identify yours and you have your roadmap.
DRAG 01 — The Disconnected Stack
"You're using ChatGPT in one tab and your CRM in another. Your team is copy-pasting between tools. That's not AI — that's extra admin."
Most small businesses don't have an AI problem. They have a tab problem.
AI tools get adopted at the individual level — someone discovers ChatGPT, starts using it to draft emails, maybe writes a few social posts with it. It saves them time personally. Then nothing changes at a business level, because the tool exists beside the workflow rather than inside it.
The result is invisible friction. Staff switch context a dozen times a day, manually bridging the gap between their AI tool and the system where the work actually lives. It feels like progress because something new is being used. But the underlying process hasn't changed — it's just acquired an extra step.
The fix is not a new tool. It is connection. AI that is embedded into the exact point where friction occurs — inside the CRM, inside the inbox, inside the booking system — gets used. AI that lives in a separate window gets abandoned.
Real-world pattern: A regional property management firm had three members of staff using ChatGPT daily to draft maintenance notices, tenancy renewal letters, and inspection reports. Each one was writing prompts from scratch, copy-pasting outputs into their property management software, then manually formatting them to match house style. The AI was saving individual minutes. It was costing the business an hour a day in aggregate — and producing inconsistent outputs. One connected workflow, built directly into the software they already used, replaced all three parallel processes in an afternoon.
DRAG 02 — The Knowledge Vacuum
"Your AI assistant doesn't know your pricing, your clients, your standards, or your market. So it gives you answers that could belong to any business anywhere."
This is the drag that kills trust fastest — and it is the most misunderstood.
When a business owner asks an AI tool a question and gets a generic, slightly-wrong, confidently-delivered answer, the instinct is to blame the technology. "It's not smart enough for our industry." In most cases, that's not the problem. The problem is that the AI has no idea who you are, what you charge, how you work, or what good looks like in your context.
Large language models are trained on the internet. They know a lot about accountancy in general. They know nothing about your fee structures, your standard engagement terms, your recurring client roster, or the specific way your practice handles year-end. Without that context, they will produce the business equivalent of a very well-spoken stranger who has just read a Wikipedia article about your industry.
The solution is a knowledge layer — a structured, queryable version of your business's institutional knowledge that AI can access securely when it needs to respond. This is what turns generic into specific. It is the difference between an AI that sounds like it works for you and one that sounds like it was built for everyone else.
McKinsey describes this failure as "generic outputs: systems not designed to learn and apply organisational standards." For a small business, the cost is not just bad answers — it is the erosion of the trust that would make AI genuinely useful.
Real-world pattern: A small law firm piloted an AI assistant for client communications. Partners rejected it within two weeks because every draft felt like it had been written for a different firm — wrong tone, wrong jurisdiction references, wrong fee language. The AI wasn't broken. It just didn't know the firm. A knowledge layer built from their standard precedents, fee schedules, and communication guidelines changed the output quality overnight. The same partners who rejected it were using it daily within a month.
DRAG 03 — The SOP Wall
"Your best processes live in your head. Until they're documented as step-by-step specifications, no AI can run them — and no new hire can either."
This is the drag most business owners don't see coming — because it has nothing to do with AI.
Before you can automate a process, you have to be able to describe it. Not generally. Specifically. Step by step, decision by decision, with clear criteria for what a good outcome looks like at each stage. Most SMB processes have never been documented to that level of detail because they didn't need to be — they lived in the head of the person who built them.
AI requires the same specification that a good employee handbook requires, taken one level further. It needs to know: what triggers this process? What information is needed to start? What are the steps in sequence? What does done look like? What are the exceptions? Without that, even the most powerful AI system is guessing — and a guessing AI in a business-critical workflow is worse than no AI at all.
The SOP Wall is actually a gift in disguise. The businesses that clear it to deploy AI end up with documented, teachable, scalable processes that improve operations entirely independently of the AI. The documentation has value on its own. The AI just makes it inevitable.
Real-world pattern: A building contractor wanted to automate their quoting process. Discovery revealed that the quoting process existed only in the owner's head — shaped by fifteen years of experience, adjusted by intuition on every job. There was no written specification, no pricing logic, no documented exceptions. Before any automation could be built, the process had to be mapped. That mapping exercise took four hours and produced a document the owner said was "the most useful thing we've made in years" — because it meant anyone in the business could now produce a quote, not just him. The AI workflow came second. The process design came first.
DRAG 04 — The Trust Gap
"It gave one bad answer. Now nobody uses it. Or everyone uses it uncritically and nobody checks the output. Neither scales."
McKinsey's 2026 research on AI experiences identifies a pattern that will be immediately recognisable to anyone who has watched a team interact with an AI tool:
"Users oscillate between accepting outputs uncritically or abandoning tools when results are disappointing."
Both failure modes are expensive. Uncritical acceptance produces errors that erode client trust. Abandonment after a single bad experience means the investment produces nothing.
The underlying problem is that trust with AI, exactly like trust with a new team member, has to be built deliberately. It does not emerge automatically from good technology. It emerges from designed collaboration — clear protocols for when to use AI, what to check, what authority exists to override it, and what feedback loop exists to improve it over time.
For small businesses this plays out practically: a receptionist who used AI to draft an appointment reminder that went out with the wrong date. A salesperson whose AI-generated proposal contained a competitor's product name. An accountant whose AI summary miscategorised a transaction. One incident, no protocol for correction, tool abandoned. The investment written off. The problem never actually diagnosed.
The fix is not better AI. It is a deliberate handoff design — knowing exactly where human judgement stays in the loop and building that into the workflow from day one, not as an afterthought.
DRAG 05 — The Pilot Trap
"You tried it for three weeks. It didn't stick. You still have the subscription. Sound familiar?"
This is the most common drag and the most demoralising one, because the businesses that fall into it are usually the ones trying hardest.
They run a genuine pilot. It works in controlled conditions. The team is engaged during the test period. Results look promising. And then — nothing ships. The pilot runs for another month. And another. Someone asks about it in a meeting and there is a vague answer about "still evaluating." A year later it is still a pilot.
The path from proof-of-concept to production is not a technology problem. It is an ownership problem. There is no named person responsible for making it live. There are no defined production criteria — no agreed standard for what "good enough to deploy" looks like. There is no go-live date. And without those three things, the pilot floats in permanent evaluation mode while the business continues running on the manual process it was designed to replace.
McKinsey's data makes this structural: two-thirds of companies haven't begun scaling AI across their organisation. Not because the AI doesn't work. Because the transition from "this works in a test" to "this runs in the business" requires an operational decision, not a technical one. Someone has to own it. Someone has to ship it.
The single most useful thing you can do before your next AI pilot begins: Write down the name of the person who will make the go-live decision, the date by which that decision will be made, and the three criteria that will determine the outcome. That document — not the technology — is what separates a pilot from a product.
Which of these five sounds most like your business right now?
Most organisations carry at least two. Some carry all five simultaneously, which is why AI feels like it never quite delivers despite genuine effort and genuine investment.
The next section will help you score exactly where you stand — and identify which Drag is costing you most.
THE FIVE DRAGS
01
The Disconnected Stack
Multiple AI tools running in parallel with no shared data layer, context, or memory. Every tool starts from zero — and the seams between them are invisible costs your team absorbs manually.
02
The Knowledge Vacuum
The AI doesn't know what your business knows. No client history, no pricing logic, no service specifics — just generic outputs from generic inputs that require human correction before use.
03
The SOP Wall
Your processes exist in people's heads, not in a format AI can follow. You can't automate what isn't documented. The wall isn't technology — it's the absence of a teachable process.
04
The Trust Gap
The team doesn't trust AI enough to act on its outputs without checking everything. Audit fatigue sets in. The tool gets quietly abandoned, surviving only as a line on the subscription bill.
05
The Pilot Trap
A use case works in testing but never reaches production. Perfect becomes the enemy of operational. Months pass. The pilot is still a pilot. The organisation concludes AI 'doesn't quite work' for them.
SECTION 03 — THE DIAGNOSTIC
Where Are You Right Now?
Reading about the Five Drags is useful. Knowing which ones apply to your business is actionable.
The following twelve questions are designed to give you an honest picture of where you stand. There are no trick questions and no right answers to perform. Score yourself on what is actually true today, not what you intend to build or what you tried once six months ago.
How to score:
- 0 — No. This isn't in place.
- 1 — Partially. It exists but it's inconsistent, informal, or used by some people some of the time.
- 2 — Yes. This is genuinely how the business operates.
Work through each section. Add your scores as you go. Your total is at the end.
DRAG 01 — Stack Integration
Q1. Does your AI tooling connect directly to the software your team uses every day — or do they have to open a separate window, tab, or application to access it?
Q2. Can you name one workflow in your business that runs from trigger to completion without a human manually copying information from one system to another?
Your score for this section: ___ / 4
DRAG 02 — Knowledge Architecture
Q3. If someone on your team asked your AI assistant a specific question about your business right now — your pricing, your service standards, a current client situation — would it give a useful, specific answer? Or a generic one that could apply to any business?
Q4. Does your business have a structured knowledge base — documented processes, client information, pricing, standards — that is queryable by AI and kept up to date?
Your score for this section: ___ / 4
DRAG 03 — Process Design
Q5. Pick your single most repetitive business process — quoting, onboarding, responding to enquiries, whatever takes the most time. Has it been mapped, step by step, with documented decision criteria and defined outputs?
Q6. If a new team member started tomorrow and needed to complete that process without asking anyone for help, is there a document that would let them do that?
Your score for this section: ___ / 4
DRAG 04 — Human-AI Trust
Q7. Does your team have a clear, shared understanding of which AI outputs get reviewed before going out — and who is responsible for that review?
Q8. In the last thirty days, has a bad AI output caused either an error that reached a client, or a team member to stop using the tool entirely?
(Score Q8 inversely — 0 if this happened, 1 if unsure, 2 if it didn't.)
Your score for this section: ___ / 4
DRAG 05 — Scale Readiness
Q9. Do you have any AI pilots or experiments that have been running for more than three months without becoming part of your standard operating process?
(Score inversely — 0 if yes, 2 if no.)
Q10. Is there a named person in your business — not "the team" or "we" — who is specifically responsible for your AI operational roadmap and accountable for outcomes?
Q11. Can you currently measure the business impact of your AI tools — time saved, errors reduced, revenue influenced — with real numbers, not impressions?
Q12. If your business doubled in volume tomorrow, could your current AI infrastructure handle the additional load without manual intervention?
Your score for this section: ___ / 8
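If you'd rather let a script do the arithmetic, the tally (including the two inversely scored questions, Q8 and Q9) can be sketched as below. The function and scale names are illustrative assumptions, not part of any tool — and note that twelve questions, each worth 0-2 points, sum to a maximum of 24:

```python
# Hypothetical tally for the twelve-question diagnostic.
# Most questions score: no = 0, partially = 1, yes = 2.
# Q8 and Q9 are scored inversely, because a "yes" is the bad outcome.

NORMAL = {"no": 0, "partially": 1, "yes": 2}
INVERSE = {"yes": 0, "unsure": 1, "no": 2}  # "unsure" = 1 is an assumption for Q9
INVERTED_QUESTIONS = {8, 9}

def tally(answers: dict[int, str]) -> int:
    """answers maps question number (1-12) to an answer string."""
    total = 0
    for question, answer in answers.items():
        scale = INVERSE if question in INVERTED_QUESTIONS else NORMAL
        total += scale[answer]
    return total

# A business that answers "yes" everywhere it helps, and "no"
# to the two inverted questions, scores the maximum.
best = {q: ("no" if q in INVERTED_QUESTIONS else "yes") for q in range(1, 13)}
print(tally(best))  # 24
```

The same logic drops into a spreadsheet in a few minutes; the point is simply that the two inverted questions are easy to mis-score by hand.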
YOUR TOTAL SCORE: ___ / 24
0 – 6 — Pre-Foundation
The building blocks aren't in place yet. That's not a criticism — it means you're reading this at exactly the right moment. The businesses that move fastest from here are the ones that don't try to fix everything at once. Pick your highest-friction Drag and start there. One thing, done properly, changes more than five things done halfway.
7 – 14 — Intent Without Infrastructure
You're in the majority — and the most fixable position. You have real AI ambition, probably some tools already running, and a team that's open to change. What's missing is the connective tissue: the knowledge layer, the process documentation, the handoff design that turns individual tools into a functioning system. This is exactly the gap we work in.
15 – 20 — Traction, Not Scale
You've made genuine progress. One or two Drags are clearly resolved. The others are costing you disproportionately — and at your level of operational maturity, they're more visible and more fixable than they were earlier. You're not starting from scratch. You're optimising a foundation that already exists. The question is which Drag to close next, and in what sequence.
21 – 24 — Rare Territory
Fewer than one in twenty businesses score here honestly. You've either already been through a structured AI implementation — or you've thought about this more carefully than most. Either way, the conversation we'd want to have with you is different. It's not about fixing Drags. It's about what compounds next.
Make a note of your lowest-scoring section. That's your primary Drag. The next section maps the five deployment moves that close each one — starting with the one that matters most to you.
SCORING GUIDE — WHAT YOUR TOTAL MEANS
0–6
Pre-Foundation
AI is aspirational. The infrastructure required to make it operational does not yet exist. Focus on process documentation and stack clarity before any AI investment.
7–14
Intent Without Infrastructure
Genuine enthusiasm, real tools — but no connective tissue. You're buying AI products without building AI infrastructure. You need a connected deployment, not more subscriptions.
15–20
Traction, Not Scale
Real progress. AI is working somewhere in your business. The remaining Drags are costing you compounding returns. Identify the highest-friction gap and close it next.
21–24
Rare Territory
You're operating in the 1%. The gap is closed — now focus on depth. The returns from AI infrastructure compound just like SEO: the more you build, the more you earn from what's already running.
SECTION 04 — THE DEPLOYMENT BLUEPRINT
Five Moves That Close the Gap
Knowing which Drags are slowing you down is half the work. This section is the other half.
What follows is one deployment move for each Drag — not a theoretical framework, but a practical description of what closing each gap actually looks like in a small or medium business. You don't need to run all five at once. In fact, trying to is one of the most reliable ways to run all five badly.
Start with your lowest-scoring section from the diagnostic. Run that move first. Then the next.
The sequence matters because the moves build on each other — a connected stack is more valuable once it has a knowledge layer; a knowledge layer is more powerful once your processes are documented. Done in order, the five moves compound. Done in parallel, they compete for attention and none of them finish.
MOVE 01 — Embed, Don't Add
Closes Drag 01: The Disconnected Stack
The reframe: Stop evaluating AI tools on their own merits. Evaluate them on whether they can live inside the workflow your team already uses — not beside it.
The single most reliable predictor of whether an AI tool gets used is not how capable it is. It is whether using it requires a context switch. If your team has to stop what they are doing, open something else, do a thing, then come back — they will not do it consistently. Humans are not lazy. They are efficient. A tool that adds steps will be deprioritised in favour of the familiar process, every time.
The move is to audit your highest-friction workflows and ask one question about each: where, exactly, does the manual effort live — and is there an integration that puts AI at that exact point?
Most modern business software has an API, a native AI feature, or a connection layer that makes this possible without custom development. The building blocks already exist. The gap is usually not technical — it is that nobody has mapped the friction point and matched it to the available connection.
What this looks like when it's working: A team of five handling client communications stops using ChatGPT as a separate drafting tool and starts using an AI assistant embedded directly in their inbox — trained on their tone, their templates, their standard responses. Same AI capability. Zero context switching. Usage goes from occasional to constant within a week because the tool is now in the path of the work, not beside it.
MOVE 02 — Build the Memory Layer
Closes Drag 02: The Knowledge Vacuum
The reframe: Your AI is only as intelligent as what you've taught it about your business. An AI that doesn't know you is just a fast way to produce the wrong answer confidently.
This is the move that transforms generic outputs into specific, trusted ones — and it is more achievable for small businesses than most people assume. You do not need enterprise infrastructure. You need a structured, accessible version of the knowledge your business already holds: your pricing, your processes, your client information, your standards, your voice.
The technical term for this is RAG — Retrieval-Augmented Generation. The plain-English version is: you give AI a filing cabinet of everything it needs to know about your business, and it consults that filing cabinet before it answers. The model's general intelligence handles the reasoning. Your knowledge layer handles the specificity.
For an SMB, this often starts small: a document containing your service catalogue and pricing, your most common client questions and preferred answers, your brand voice guidelines. Fed into the right system, this alone closes the majority of the generic-output problem.
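As a concrete sketch of that filing-cabinet pattern, here is a minimal retrieve-then-answer flow. Real deployments use embedding-based search over a vector store rather than keyword matching; the naive overlap score below stands in for retrieval, and every function name and document in it is invented for illustration:

```python
# Minimal sketch of retrieve-then-answer ("RAG"): before the model
# answers, pull the most relevant business documents and prepend them
# to the prompt so it works from your facts, not generic ones.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble the context-plus-question prompt sent to the model."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this business context:\n{context}\n\nQuestion: {query}"

# The "filing cabinet": a handful of short, factual business documents.
knowledge_base = [
    "Standard callout fee is 120 GBP, waived for contract clients.",
    "Office hours are Monday to Friday, 9am to 5pm.",
    "All quotes are valid for 30 days from issue.",
]

print(build_prompt("What is the callout fee?", knowledge_base))
```

The model's general intelligence still does the reasoning; the retrieval step is what supplies the specificity that a generic prompt lacks.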
What this looks like when it's working: A medical practice builds a knowledge layer from their appointment protocols, FAQ responses, and patient communication standards. Their AI assistant — handling enquiry responses and appointment confirmations — stops producing generic health-service language and starts producing responses that sound like they came from that practice specifically. Staff stop editing every output. Trust builds. Usage expands.
MOVE 03 — Design the Handoff
Closes Drag 03: The SOP Wall
The reframe: Process documentation is not a prerequisite for AI — it is a prerequisite for scale. AI just makes the deadline urgent.
The businesses that deploy AI fastest are not the ones with the most advanced technology. They are the ones that have already done the unglamorous work of writing down how things get done. If your most important processes exist only in someone's head, no AI system can run them — and neither can a new hire, a covering colleague, or anyone else when that person is unavailable.
This move has two stages, and the first stage has nothing to do with AI at all.
Stage one is process mapping: sit with the person who runs the process, walk through it step by step, document every decision point, every input, every output, every exception. The output is a specification — not a flowchart, not a policy document, but a clear step-by-step description that a capable person with no prior context could follow to produce the right outcome.
Stage two is translating that specification into an agentic task design — the format an AI system needs to execute it. This is where Friends with Giants comes in. The process mapping is collaborative. The translation is ours.
McKinsey's principle here is direct: "Build for depth — automate entire workflows, not just individual answers." A single embedded, end-to-end automated process delivers more value than a dozen AI tools used ad hoc.
What this looks like when it's working: A financial planning firm maps their client review preparation process — previously a three-hour manual exercise per client involving six different data sources. The mapped process becomes an automated workflow: data pulled, summarised, formatted, and pre-populated into the review template before the adviser touches it. Preparation time drops to twenty minutes. The adviser spends the saved time on the conversation, not the admin.
MOVE 04 — Wire in the Human
Closes Drag 04: The Trust Gap
The reframe: The goal is not to remove human judgement from the workflow. It is to place it at exactly the right moment — where it adds the most value and where the cost of error is highest.
This is the move most businesses get wrong in both directions. Either they put humans in the loop everywhere — which makes the AI pointless — or they remove humans entirely and discover the hard way where the edge cases live.
Designed collaboration means asking a specific question for each AI-assisted workflow: at what point does a human need to see this before it goes further? Not everywhere. Not nowhere. At the specific moment where context, judgement, or client relationship matters more than speed.
For most SMB workflows, this is not a complex design problem. It usually resolves to: AI drafts, human approves before it goes external. AI analyses, human decides before it goes to the client. AI flags, human investigates before anything changes. The handoff point is usually obvious once you ask the question explicitly — the problem is that most businesses never ask it, and so the workflow either has no human checkpoint (dangerous) or too many (pointless).
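That draft-then-approve shape can be made literal in code. The sketch below is a pattern, not a product — every name in it is invented for illustration, and the point is simply that the external boundary refuses to act until a named human has signed off:

```python
# Sketch of a human-in-the-loop handoff: AI drafts, a human approves,
# and nothing crosses the external boundary unapproved.
from dataclasses import dataclass

@dataclass
class Draft:
    recipient: str
    body: str
    approved: bool = False  # nothing ships until a human flips this

def review(draft: Draft, reviewer_approves: bool) -> Draft:
    """The human checkpoint: a named reviewer signs off (or doesn't)."""
    draft.approved = reviewer_approves
    return draft

def send(draft: Draft) -> str:
    """The external boundary: refuses to send anything unapproved."""
    if not draft.approved:
        raise PermissionError("Unapproved draft cannot go external.")
    return f"sent to {draft.recipient}"
```

The design choice worth noticing: the check lives at the send boundary, not in the reviewer's memory. Skipping the checkpoint is impossible by construction, which is exactly what makes the handoff trustworthy.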
Once the handoff is designed and the team understands their role within it — reviewer, not author; decision-maker, not executor — trust builds naturally, because the AI is no longer a black box producing final outputs. It is a capable colleague doing the first pass, with a human in the position to catch anything that needs catching.
What this looks like when it's working: A recruitment firm uses AI to screen inbound applications and draft initial candidate summaries. The rule is simple: AI produces the summary, a consultant reviews it before any candidate communication goes out. The AI handles the volume. The consultant handles the judgement. Response times drop from three days to three hours. Error rate is lower than the manual process it replaced, because the reviewer is now checking a structured summary rather than reading from scratch.
MOVE 05 — Ship the Pilot
Closes Drag 05: The Pilot Trap
The reframe: A pilot that hasn't shipped is not a cautious strategy. It is an unfinished decision.
This move is the simplest to describe and the hardest to execute, because it requires something that has nothing to do with technology: a decision.
Specifically, three decisions made before the pilot begins rather than after it ends.
Decision one: the owner. Who is the named person responsible for making this live? Not "the team." One person. Their name is on the deployment. They have the authority to say go.
Decision two: the criteria. What does good enough look like? Define it specifically before you start — not "we'll know it when we see it." Three measurable criteria. When two of three are met, it ships.
Decision three: the date. Not "when it's ready." A calendar date by which the go/no-go decision will be made. If it ships, it ships. If it doesn't, the explicit decision is made to kill it — not to extend the pilot indefinitely with no end in sight.
These three decisions cost nothing and take twenty minutes. They are the difference between a pilot and a product. Between investment and write-off. Between an organisation that is building operational AI capability and one that has been "exploring AI" for the better part of two years.
What this looks like when it's working: Friends with Giants built our own agency — this website, this guide, our operational workflow, our brand system — using AI systems we designed and deployed ourselves. Brief to live: eight weeks. Owner: one person. Criteria: defined upfront. Date: fixed. What a traditional agency would have quoted at significant cost and several months was delivered in a fraction of the time, because the AI wasn't evaluated indefinitely — it was deployed deliberately. That's not a unique capability. It's a decision.
Five moves. One primary Drag. Start there.
The businesses that close the execution gap are not the ones with the most advanced AI. They are the ones that picked one problem, solved it completely, and moved to the next. Compounding operational improvement is slower to start and dramatically faster to finish than attempting everything at once.
The final section tells you what your next move looks like in practice.
THE DEPLOYMENT BLUEPRINT — FIVE MOVES IN SEQUENCE
01
Embed, Don't Add
Connect AI into your existing tools and workflows rather than building a parallel AI layer. AI inside the stack beats AI beside it — every time. The goal is invisible infrastructure, not visible effort.
02
Build the Memory Layer
Create a knowledge base from your SOPs, client data, pricing, and precedents. This is what transforms generic AI into your AI — one that understands your business, not just the English language.
03
Design the Handoff
Define precisely where AI hands off to humans — and what context travels with it. The handoff is where trust is won or lost. A bad handoff erodes more confidence than a bad AI output.
04
Wire in the Human
Build approval checkpoints and human review loops before any external-facing output. Autonomy should be earned incrementally through demonstrated reliability — not assumed from day one.
05
Ship the Pilot
Deploy to a limited scope with real stakes. A real deployment with constraints teaches more than any amount of controlled testing. Constrained and live beats perfect and theoretical.
SECTION 05 — YOUR NEXT MOVE
The Choice Point
Here is where most guides end with a list of recommendations.
This one ends with a single question.
You have read the research. You have scored yourself against the Five Drags. You know which one is costing you most. The gap between where you are and where you need to be is not a mystery — it is a specification. You can see it clearly now.
So the only question that matters is the same one that separates the 1% from the 99%:
Are you going to act on it, or file it?
McKinsey's data is unambiguous on this point. Ninety percent of leaders expect AI to drive growth in the next three years. Nearly seventy percent of business transformations fail — not because the technology doesn't work, but because the decision to act decisively never quite gets made. The pilot stays a pilot. The gap stays a gap. The competitor who did make the decision compounds quietly while everyone else waits for the perfect moment.
The perfect moment was twelve months ago. The next best moment is now.
If your audit score was 0–6: You don't need a roadmap yet. You need one thing, done properly. Tell us your primary Drag. We will tell you exactly where to start.
If your audit score was 7–14: You have intent without infrastructure. The question is which Drag to close first and in what sequence. That conversation takes thirty minutes.
If your audit score was 15–20: You have a foundation. Let's talk about what to optimise next — and what compounds after that.
If your audit score was 21–24: You are already building. Let's talk about what comes next — the systems that don't just automate what you do today, but change what becomes possible tomorrow.
Whatever your score: the operational infrastructure you build in the next twelve months will determine where your business stands for the next five years. Your competitors are running the same pilot they started two years ago.
You don't have to.
Tell us where the friction is. We'll show you what's possible.
Book a 30-minute call — no pitch, no pressure, no deck. Explore Operational AI at friendswithgiants.com
Friends with Giants is an AI-first agency. We architect the operational infrastructure that turns AI ambition into measurable business outcomes.
Statistical data cited from McKinsey & Company: Superagency in the Workplace (2025) and Building Next-Horizon AI Experiences (2026). All rights reserved by McKinsey & Company.