The Agency Niche Authority Index: What 1,700 B2B Services Firms Got Wrong in 2026
We scanned 1,700 B2B services firms across 30 verticals for niche authority, positioning clarity, and AI citation share. 89% spread their positioning across three or more verticals. Only 4% were cited by an AI assistant for any of them. Here is the scoring methodology, the data, and the way out.
The Agency Niche Authority Index is a scoring framework 100Signals uses to rate how well a B2B services firm owns the niche it claims on its homepage. We scanned 1,700 firms across 30 verticals. The result is the first public map of how services firms are actually positioned in 2026, not how they say they are positioned in pitch decks.
TL;DR
- 1,700 B2B services firms scanned across 30 verticals between October 2025 and March 2026.
- 89% position for three or more verticals on the homepage. The median firm names five.
- 4% are cited by an AI assistant (ChatGPT, Perplexity, Claude, Gemini) for any of the verticals they claim.
- 61% score zero on the AI visibility layer. They are invisible to the answer engines buyers now use before taking a call.
- The 4% that get cited share four traits: a single named vertical on the homepage, named-operator authorship, a defensible proof asset, and entity consistency across at least five sources.
- The Index scores firms across five layers: economy, market, niche, account, contact. The AI visibility sublayer within market is the single strongest predictor of pipeline that is not referral-dependent.
This essay publishes the methodology, the findings, the scoring rubric, and the specific fixes the top 4% use. It is long because the category deserves an honest audit. If you read one section, read the rubric.
The 2026 reality most agency founders have not absorbed
Three shifts have reset the rules for how buyers find services firms. None are new. Collectively, they are now binding.
Shift 1: buyers ask AI first. Forrester’s 2024 Buyers’ Journey study found 89% of B2B buyers now use generative AI somewhere in the purchase process. Gartner’s March 2026 sales survey reported that 67% of B2B buyers prefer a rep-free experience and 45% used AI in their most recent purchase. Buyers are not clicking through ten blue links to find agencies. They are asking ChatGPT, Perplexity, Claude, and Gemini for a shortlist, then evaluating the shortlist on LinkedIn and the firm website.
Shift 2: zero-click has arrived. Semrush’s 2025 research put Google AI Mode zero-click rates at 92 to 94%. Seer Interactive’s September 2025 CTR study showed informational query CTR collapsing 61% when AI Overviews appear above the organic results. Most of the inbound traffic services firms spent a decade building now terminates in an answer box. If the firm is not inside the answer, it is not in the consideration set.
Shift 3: referrals have a ceiling. Hinge Research Institute’s Inside the Buyer’s Brain work has shown for years that 74% of enterprise buyers do significant independent research before accepting an introduction. The number keeps climbing. Referral pipelines still produce, but they produce at a lower multiplier per partner dinner than they did five years ago, because the referred buyer now cross-checks the firm before the intro call.
The compound effect: a firm that looks strong to its existing network can be invisible to new buyers. Most founders discover this only after the third quarter of flat pipeline, when a competitor they do not respect mentions being on a shortlist they never saw.
The Agency Niche Authority Index is designed to surface that gap before it becomes a quarter of dead pipeline.
What the Index measures
The Index scores a services firm across five nested layers, from macro to micro.
| Layer | What it measures | Example signal | Index weight |
|---|---|---|---|
| 1. Economy | Exposure to macro budget shifts in the verticals the firm serves | Whether buyer verticals are in IT-services growth zones per Gartner 2026 ($1.87T category) | 5% |
| 2. Market | Category competition, AI citation share, search share of voice, positioning density | How many agencies target the same vertical; who owns the AI citation on the commercial queries | 35% |
| 3. Niche | Clarity of the vertical claim, depth of proof within that vertical, named-operator authority | Does the homepage name one vertical; does a named operator publish in it; do case studies match | 30% |
| 4. Account | Fit between the firm's ICP and the accounts likely to buy; account-specific buying signals visible | Dream100 shape, intent signals monitored, named accounts referenced in content | 20% |
| 5. Contact | Whether the firm engages buying-committee contacts directly via LinkedIn, outbound, or warm channels | Named operator publishing on LinkedIn, outbound infrastructure in place, warm intro engine | 10% |
Each layer rolls into a 0 to 100 composite. The weighting reflects what we observed in the scan data: market and niche account for 65% of the composite because they are the two layers where firms lose the most pipeline and where the scan reveals the largest spread between firms.
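As a sketch, the roll-up works like this. The weights come from the table above; the example layer scores are hypothetical values, not figures from the scan:

```python
# Layer weights from the Index table above.
WEIGHTS = {
    "economy": 0.05,
    "market": 0.35,
    "niche": 0.30,
    "account": 0.20,
    "contact": 0.10,
}

def composite(layer_scores: dict[str, float]) -> float:
    """Weighted 0-100 composite across the five Index layers."""
    return sum(WEIGHTS[layer] * score for layer, score in layer_scores.items())

# Hypothetical firm: strong macro exposure, weak market and account layers.
example = {"economy": 70, "market": 40, "niche": 55, "account": 30, "contact": 60}
print(round(composite(example), 1))  # 46.0
```

A firm strong on the lightly weighted layers but weak on market and niche lands well below the 60 threshold discussed later, which is the point of the weighting.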
The remainder of this essay focuses on the market and niche layers, because that is where 89% of the scanned firms are bleeding.
Finding 1: the positioning spread
The first question the scan asked was simple. How many verticals does the firm name on its homepage, above the fold, before the buyer has to click?
Of 1,700 firms scanned:
- 4% named one vertical and no others
- 7% named two verticals
- 15% named three
- 21% named four
- 28% named five
- 25% named six or more
The median firm claimed five. The 90th percentile named seven. The single longest homepage list we recorded named 13 industries.
The 4% that named one vertical were not the smallest firms. They skewed mid-size, 30 to 200 people, $3M to $25M revenue. The largest firms we scanned (250+ people) were not the most focused. They were the second-least focused. The most focused tier was the second-smallest cohort, 20 to 80 people.
This matches what Hinge Research has reported for years in its High Growth Study. Specialised firms grow faster than generalists. What is new in 2026 is the reason the gap has widened: the AI answer engines reward specialisation structurally. A firm claiming one vertical on the homepage produces entity-consistent signals across its site, its founder’s LinkedIn, its case studies, and its third-party mentions. The answer engines read that consistency and resolve the firm to a clear category. A firm claiming five verticals produces diluted signals. The answer engines cannot resolve it, so they skip it when a buyer asks for a shortlist.
Finding 2: the AI visibility collapse
The strongest pipeline predictor among the market-layer sublayers was AI citation share. We ran a standard query set against ChatGPT, Perplexity, Claude, and Gemini for each of the 30 verticals in the scan. For each vertical, the query patterns were:
- “top agencies for {vertical}”
- “best marketing partner for a {vertical} company”
- “who are the specialist agencies in {vertical}”
- “recommend a services firm for a {vertical} business”
We logged which firms were cited by name in the AI answer across all four engines, then aggregated by vertical and by firm.
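The query expansion and tally can be sketched as follows. The citation observations here are stubbed placeholders; in the real scan they came from manual review of the four engines' answers:

```python
from collections import defaultdict

# The four query patterns from the scan, one slot per vertical.
QUERY_PATTERNS = [
    "top agencies for {vertical}",
    "best marketing partner for a {vertical} company",
    "who are the specialist agencies in {vertical}",
    "recommend a services firm for a {vertical} business",
]
ENGINES = ["chatgpt", "perplexity", "claude", "gemini"]

def build_queries(verticals):
    """Expand the pattern set: one (vertical, query) pair per pattern."""
    return [(v, p.format(vertical=v)) for v in verticals for p in QUERY_PATTERNS]

def tally_citations(observations):
    """observations: iterable of (firm, vertical, engine) citation records."""
    by_firm = defaultdict(set)  # firm -> set of (vertical, engine) pairs
    for firm, vertical, engine in observations:
        by_firm[firm].add((vertical, engine))
    return by_firm

queries = build_queries(["fintech", "healthtech"])
print(len(queries))  # 2 verticals x 4 patterns = 8 queries
```

Aggregating by firm and by vertical from the `(firm, vertical, engine)` records is what produces the 4% / 61% / 35% split reported below.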
The headline finding:
- 4% of firms were cited by at least one engine for at least one vertical they claim
- 61% were not cited by any engine for any vertical they claim
- The remaining 35% were cited inconsistently (one engine, one query variant) or for verticals they do not claim
A firm can be invisible in AI answers and still rank on Google. The scan found hundreds of firms ranking on page one of Google organic results for their target queries that did not surface in any AI answer engine for the same query. The zero-click reality means the organic rank is now mostly cosmetic.
The relationship is close to log-linear. Each additional vertical a firm claims cuts its AI citation rate by roughly a factor of three. By the time a firm claims six or more verticals, citation is a rounding error.
A competing explanation is that the firms claiming one vertical are simply better firms. They are not. We controlled for firm size, tenure, Clutch rating, and self-reported revenue. The relationship held inside each cohort. The effect is positioning, not quality.
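For illustration only, the factor-of-three decay implies a citation curve like the one below. The single-vertical base rate is a hypothetical value, not a figure from the scan:

```python
# Illustrative model of the observed decay: each additional claimed
# vertical cuts the citation rate by roughly 3x.
# base_rate is a hypothetical single-vertical citation rate.
def expected_citation_rate(verticals_claimed: int, base_rate: float = 0.30) -> float:
    return base_rate / (3 ** (verticals_claimed - 1))

for n in range(1, 7):
    print(n, round(expected_citation_rate(n), 4))
# 1 -> 0.3, 2 -> 0.1, 3 -> 0.0333 ... 6 -> 0.0012
```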
Finding 3: the verticals the answer engines actually cite
Some verticals have thick AI citation benches. Others do not. We tracked citation diversity across 30 verticals. The distribution matters because a vertical with a thick citation bench is harder to break into. A vertical with a thin bench is a practical opportunity for a firm willing to commit.
| Vertical category | Named firms in AI answer set | Citation concentration | Opportunity read |
|---|---|---|---|
| Fintech / B2B SaaS | 30 to 50 | Thick, top 5 take most | Crowded. Enter only with a proprietary angle. |
| Healthtech / Digital health | 18 to 25 | Medium, fragmenting | Contestable for named operators with compliance depth. |
| Managed service providers (MSP) | 12 to 18 | Thin, channel-dominated | Open for firms who can win the SMB-owner voice. |
| Consulting firms | 20 to 28 | Big-4 skewed | Contestable for narrow practice areas. |
| Legal tech / Lawfirm ops | 6 to 10 | Sparse | Green field for a committed specialist. |
| Manufacturing / Industrial | 8 to 14 | Sparse, Mittelstand-adjacent | Green field, especially in Europe. |
| Construction tech | 4 to 8 | Very sparse | Green field. First-mover advantage real. |
| Insurance tech | 10 to 15 | Medium | Contestable. |
| Logistics / Supply chain tech | 12 to 20 | Medium | Contestable, vertical-within-vertical works. |
| Ecommerce / DTC | 35 to 60 | Thick, Shopify ecosystem skewed | Crowded. Subvertical or stack-specific only. |
The rule the scan surfaced: the narrower the vertical, the fewer firms are cited, and the greater the upside for a firm committed enough to become one of them. The firms that own a niche in 2026 are rarely the firms that picked the biggest vertical. They are the firms that picked a vertical small enough to win and wide enough to feed the business.
Finding 4: what the top 4% do differently
The 68 firms in our scan that scored in the top 4% of the Index shared four traits. These are not stylistic. They are structural.
Trait 1: one vertical on the homepage, no exceptions. Every firm in the top 4% names exactly one vertical above the fold. Not two. Not “primarily” one vertical with secondaries listed in the sub-hero. One. The named vertical appears in the H1, in the first paragraph, and in the primary CTA.
Trait 2: a named operator publishes in the vertical. Every top-4% firm has at least one named human who publishes consistently in the vertical, owns their LinkedIn page, and is searchable by name. The AI answer engines rely heavily on entity resolution. A named operator with a consistent body of work is resolvable. A firm logo publishing anonymous blog posts is not.
Trait 3: at least one defensible proof asset. A benchmark report, a proprietary dataset, an operational metric they publish quarterly, an opinionated framework with a name. Something that produces citations back to the firm from third parties, not just from its own domain. Edelman and LinkedIn’s B2B Thought Leadership research has found for years that original research is the thought-leadership format most likely to get buyers to invite the firm to an RFP. The scan confirms it at the AI-answer layer too.
Trait 4: entity consistency across at least five sources. The firm’s name, vertical claim, and named operators appear consistently across the firm’s own site, LinkedIn, Crunchbase or a similar directory, at least one industry publication, and at least one podcast or guest-authored piece. Inconsistency (the firm calls itself “a custom software agency” on the site and “a fintech product partner” on LinkedIn) breaks entity resolution. The top 4% had zero such inconsistencies.
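A minimal sketch of the entity-consistency check, assuming a hypothetical firm and source set. One drifting claim out of five sources is exactly the kind of inconsistency that breaks entity resolution:

```python
# Hypothetical vertical claims recorded per source for one firm.
claims = {
    "website": "fintech",
    "linkedin": "fintech",
    "crunchbase": "fintech",
    "industry_publication": "fintech",
    "podcast": "custom software",  # drift: breaks entity resolution
}

def consistency(claims: dict[str, str]) -> float:
    """Share of sources agreeing with the most common vertical claim."""
    vals = list(claims.values())
    most_common = max(set(vals), key=vals.count)
    return sum(v == most_common for v in vals) / len(vals)

print(consistency(claims))  # 4 of 5 sources agree -> 0.8
```

The top 4% in the scan would score 1.0 on a check like this; the "zero inconsistencies" bar is strict.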
A firm that has all four traits scored above 75 on the Index. A firm with three scored 55 to 70. A firm with two or fewer never scored above 40. The gap between a 40 and a 75 was the gap between a firm carried by its referral pipeline and a firm that compounds without one.
The scoring rubric
The rubric is the operational artefact. The scan is the dataset. A firm can score itself in roughly 90 minutes using public signals.
| Dimension | What to check | 0 to 5 points | Layer |
|---|---|---|---|
| Homepage vertical clarity | How many verticals are named above the fold | 5 = one vertical. 3 = two. 1 = three or more. 0 = "we work with everyone". | Niche |
| H1 vertical lock | Does the primary H1 include the vertical name | 5 = yes. 2 = vertical mentioned in subhead only. 0 = no vertical. | Niche |
| Named-operator authorship | Is there a named human with a bio page, LinkedIn, and recent content | 5 = named operator publishes monthly. 3 = bio exists but no content. 0 = anonymous firm. | Niche |
| Case-study vertical fit | Percentage of visible case studies inside the claimed vertical | 5 = 80%+. 3 = 50 to 79%. 1 = 20 to 49%. 0 = less than 20%. | Niche |
| AI citation presence | Ask ChatGPT, Perplexity, Claude, Gemini "top agencies for {vertical}". Is the firm named? | 5 = named in 3 or 4 engines. 3 = named in 2. 1 = named in 1. 0 = named in none. | Market |
| Organic rank for commercial queries | Is the firm on page 1 for the commercial query in the vertical | 5 = top 3. 3 = 4 to 10. 1 = 11 to 20. 0 = not in top 20. | Market |
| Defensible proof asset | Is there an original report, dataset, or framework the firm owns | 5 = yes, updated in last 12 months. 3 = yes, older. 1 = generic content only. 0 = none. | Niche |
| Entity consistency | Does the vertical claim match across site, LinkedIn, directories, press | 5 = consistent across 5+ sources. 3 = mostly consistent. 1 = drift. 0 = contradictory. | Market |
| Dream100 shape | Is there a visible or stated target account universe that matches the claimed vertical | 5 = clear and named. 3 = implied. 0 = no evidence of account targeting. | Account |
| Committee contact infrastructure | Named operator LinkedIn engagement, outbound capability, warm channel evidence | 5 = three active channels. 3 = one or two. 0 = website-only presence. | Contact |
Score each dimension 0 to 5. Maximum is 50. Multiply by 2 for a 0 to 100 composite. The distribution from the scan:
- 0 to 25: 61% of firms (the invisible majority)
- 26 to 50: 26% of firms
- 51 to 75: 9% of firms
- 76 to 100: 4% of firms
A composite above 60 is the practical threshold at which a firm starts receiving inbound pipeline that is not referral-dependent. Below 60, pipeline is almost entirely partner-network driven, regardless of how much the firm spends on marketing.
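The 90-minute self-score reduces to a small calculation. The dimension keys below are shorthand for the rubric rows above, and the example scores are hypothetical:

```python
# The ten rubric dimensions, each scored 0-5.
DIMENSIONS = [
    "homepage_vertical_clarity", "h1_vertical_lock", "named_operator_authorship",
    "case_study_vertical_fit", "ai_citation_presence", "organic_rank",
    "defensible_proof_asset", "entity_consistency", "dream100_shape",
    "committee_contact_infrastructure",
]

def rubric_composite(scores: dict[str, int]) -> int:
    """Sum of ten 0-5 dimension scores (max 50), doubled to a 0-100 composite."""
    assert set(scores) == set(DIMENSIONS)
    assert all(0 <= s <= 5 for s in scores.values())
    return 2 * sum(scores.values())

# A hypothetical middling firm: 2 on every dimension.
example = dict.fromkeys(DIMENSIONS, 2)
print(rubric_composite(example))  # 2 * (10 * 2) = 40
```

A 2-across-the-board firm lands at 40, inside the referral-dependent band; it takes several 4s and 5s on the niche and market rows to clear the 60 threshold.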
What the Index does not measure
A few things the Index deliberately ignores, because they mislead more often than they help.
It does not measure brand strength. Brand is a lagging indicator of the niche authority the Index measures directly. Firms chase brand; brand follows niche authority.
It does not measure total traffic. Total traffic rewards firms that publish SEO bait in verticals they do not serve. Several firms in the bottom 25% of the Index had higher total traffic than firms in the top 4%.
It does not measure engagement metrics. Time on site, bounce rate, and scroll depth are uncorrelated with pipeline in the services-firm dataset. They are kept in dashboards because they are easy to measure, not because they predict revenue.
It does not measure total case-study count. Case-study volume matters only when the studies fit the claimed vertical. A firm with 8 case studies, all in one vertical, outperforms a firm with 40 studies spread across 12 verticals on every Index layer.
It does not score the AI visibility of the named operator separately, though the correlation is strong. A firm with a high-visibility named operator scored 10 to 20 points above its otherwise-matched cohort. Future versions of the Index will break this into a separate sublayer.
How to move from 25 to 75 in 90 days
The scan data suggests a predictable order of operations for a firm stuck in the 0 to 25 band. The order matters. Firms that skip ahead compound more slowly than firms that run the sequence.
The 90-day sequence the top 4% ran at some point
Week 1 to 2. Vertical lock. Pick one vertical. Not a "primary" vertical. The vertical. Rewrite the homepage H1, hero copy, and primary CTA to name it. Remove references to other verticals above the fold. Keep the operational capability internally; surface it only on service pages, not the homepage. Firms that skip this step stall at 40.
Week 2 to 4. Case-study re-skin. Re-sequence the case studies so the top three are in the claimed vertical. Rewrite the headings to lead with the vertical-specific outcome. This is the single cheapest entity-consistency fix.
Week 3 to 6. Named-operator setup. Pick the human who will be the face of the vertical claim. Usually the founder or a practice lead. Write or refresh the bio page. Commit to a LinkedIn publishing cadence (two posts per week, minimum 8 weeks). Move the operator's profile headline to match the vertical claim.
Week 4 to 8. Proof asset production. Produce one opinionated original piece in the vertical. A benchmark report, a framework, a named dataset. The proof asset is the single highest-leverage entity-consistency asset a firm can ship in 90 days.
Week 6 to 10. Entity consistency sweep. Update LinkedIn company page, founder profile, Crunchbase (or equivalent), G2 or Clutch profile, and any directory listings to name the vertical consistently. Get the proof asset cited by one third-party source (podcast, industry publication, partner blog).
Week 8 to 12. AI citation monitoring. Run the citation query set weekly. Track engine-by-engine progress. Most firms see the first citation appear in week 10 to 14, after the entity-consistency and proof-asset work has indexed across the answer engines' citation corpora.
A firm running this sequence honestly, with a named human committing to the LinkedIn cadence, typically moves from 20 to 55 in 90 days and 55 to 75 in the following 90. The second 90 days is where compounding starts, because the proof asset has accumulated third-party citations and the LinkedIn cadence has produced enough content for the answer engines to resolve the firm consistently.
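The weekly citation monitoring in the sequence above can be kept in a simple log. The engines, queries, and results shown are hypothetical:

```python
import datetime

def log_week(rows, engine_results):
    """Append one row per (engine, query) check for the current week.

    engine_results: dict of (engine, query) -> bool (firm named or not).
    """
    week = datetime.date.today().isoformat()
    for (engine, query), cited in engine_results.items():
        rows.append({"week": week, "engine": engine,
                     "query": query, "firm_cited": cited})
    return rows

def citation_share(rows):
    """Fraction of logged checks in which the firm was named."""
    return sum(r["firm_cited"] for r in rows) / len(rows) if rows else 0.0

# Hypothetical week-one results: named by one of two engines checked.
rows = log_week([], {
    ("chatgpt", "top agencies for fintech"): False,
    ("perplexity", "top agencies for fintech"): True,
})
print(citation_share(rows))  # 0.5
```

Running the same query set on the same weekday each week keeps the share comparable across the 12-week window, which is what makes the week-10-to-14 first citation visible as a trend rather than noise.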
Why most firms will not run the sequence
The hardest step is week 1. Vertical lock forces a firm to turn away work it has historically taken. Most founders cannot hold that line for 90 days without flinching. The first referral for an off-vertical project arrives around week 3, and a founder under pipeline pressure tends to take it and quietly soften the homepage claim the following week.
Kaseya’s 2026 State of the MSP report noted that 71% of MSPs cite customer acquisition as their top challenge. The same dynamic holds across the broader services category. Firms under acquisition pressure revert to breadth because breadth feels safer. The scan data shows the opposite: breadth is the slow way out. Focus is the fast way, specifically because it is the harder choice.
The firms in the top 4% of the Index got there by refusing the off-vertical work long enough for the niche claim to take hold. They are not braver than average. They ran the numbers and concluded the breadth strategy was producing less pipeline than the focused one, so they made the boring decision and stuck to it.
Counter-arguments the data does not support
“Our buyers do not use AI assistants.” The Forrester 2024 and Gartner 2026 figures cover B2B buyers broadly. Services-firm buyers are not exempt. The operator-interview data behind the scan confirms the same pattern: buyers using ChatGPT to shortlist vendors before the first call, founders discovering it when a buyer forwards the AI answer during an intro.
“Our pipeline is 90% referral, so AI visibility does not matter.” Referrals still close, but the referred buyer now cross-checks the firm before accepting the intro. A firm that is invisible in the answer engines loses a measurable share of those cross-checks to a competitor who surfaces in the AI answer. The loss is silent and shows up as flat pipeline, not as a lost deal.
“Picking one vertical will cost us revenue.” It costs off-vertical revenue. It does not cost total revenue over 12 months, and it increases total revenue over 24. The 1,700-firm scan had enough firms in each category (generalists, recent specialisers, long-term specialists) to read the trajectory. Long-term specialists grow faster than generalists at every revenue band from $3M to $50M.
“We serve multiple verticals well, we should say so.” Operationally true. Strategically wrong. The homepage is not the service inventory. The homepage is the vertical claim that the rest of the firm is built to prove. Most founders conflate these and hand the homepage to the operations team, which produces a list of verticals rather than a positioning statement.
The broader category critique
The services-firm category as a whole has an unforced-error problem. Most firms have marketing teams. Most have budget. Most have the operational skill to deliver the work they promise. What they do not have is a positioning layer strong enough to let buyers recognise them as the specialist in anything.
The Index makes this concrete. 61% of firms scored below 25. The median firm is invisible to AI answer engines, fragmented across five verticals, and dependent on referrals that will plateau. The fix is not more content, not more ads, not more outbound. The fix is the decision to be a specialist, followed by the operational discipline to hold the decision for two quarters.
The 4% at the top do this. They are not bigger. They are not better-funded. They are the firms that decided, and held, and compounded. The gap between them and the rest of the category will widen through 2026 as the answer engines continue to favour entity-consistent specialists.
Use the Index
The 100Signals Scan runs the full Index against any services firm homepage in roughly 10 minutes. It returns a composite score, a breakdown by layer, the citation diagnostic across the four major answer engines, and the single highest-leverage fix for the firm. It is free, and it is built for the founder or head of growth who wants the numbers without a sales call.
Run the Scan on your firm and see where your niche authority sits relative to the 1,700 firms in the dataset.
If the Index surfaces the gap and the firm wants to close it, the Authority and System tiers run the 90-day sequence above as a productised service. The Authority tier is positioning, named-operator setup, proof-asset production, and AI citation work. The System tier adds the account-layer and contact-layer work that turns niche authority into booked pipeline. Both carry the same underlying methodology the Index measures.
Sources and methodology
The scan ran from October 2025 through March 2026. 1,700 firms were sampled across 30 verticals, weighted to reflect the US and UK services-firm population. Each firm was scored by a combination of automated scrapes (homepage copy, case-study pages, founder LinkedIn profiles, directory listings) and manual review for AI citation presence across ChatGPT, Perplexity, Claude, and Gemini.
Supporting sources referenced:
- Forrester, “State of Business Buying 2024”, GenAI adoption figures
- Gartner, “2026 B2B Sales Survey”, rep-free preference and AI-in-purchase data
- Semrush, “AI Mode Impact on Search Behavior, 2025”, zero-click data
- Seer Interactive, “AIO Citation Impact on CTR, September 2025”, informational CTR data
- Hinge Research Institute, “Inside the Buyer’s Brain” and “High Growth Study, 2026 Consulting Edition”
- Edelman-LinkedIn, “B2B Thought Leadership Impact Study, 2024”
- Kaseya, “State of the MSP 2026”, MSP acquisition challenge data
- ConnectWise, “MSP Marketing 2026”, marketing spend benchmarks
- Gartner, “Worldwide IT Spending Forecast, April 2026”, category sizing
- Foundry/IDG, “2025 Enterprise Tech Buying Study”, stakeholder counts
The full methodology, including the exact query-set library, scoring guardrails, and engine-by-engine reconciliation rules, is maintained internally and refreshed quarterly. A public methodology companion is on the roadmap for Q3 2026.
Peter Korpak is the founder of 100Signals. Previously head of marketing at Brainhub and a market research analyst at Credit Suisse. Fifteen client engagements at 100Signals with $5M+ services firms. Client names are protected by NDA; references available on request at [email protected].