Why Your Offshore Research Team's Output Gets Worse After Year One

Darren Sharma, CEO & Founder

This article assumes familiarity with offshore research models and explores the structural dynamics behind quality erosion. It draws on industry data to explain why output quality often declines after the first 12–18 months of an engagement.

The Pattern That No One Names

If your offshore research team's output has visibly declined after the first year — or if you've recently lost a lead analyst and noticed their replacement isn't producing at the same level — what you're experiencing is not an isolated setback. It is the most common structural failure mode in offshore research, and it has a predictable cause.

Most offshore research engagements start well. The first cohort of analysts is carefully selected, intensively trained, and closely supervised. Output quality is high. The onshore team exhales. The model works.

Then, somewhere between month 12 and month 24, something shifts. Reports that once required light editing now need rewriting. Analysts who seemed to understand the brief start missing context. The team lead who made the engagement work hands in their notice. A replacement arrives, trained in a week, and the cycle of over-checking begins again.

This is not a story about bad analysts. It is a structural problem — one that is built into the economics of how most offshore research providers operate. It affects outsourced credit research and outsourced equity research in equal measure, because the root cause is not discipline-specific. Understanding why it happens is the first step toward knowing whether it can be prevented.

The Retention-Quality Loop

The offshore knowledge process outsourcing (KPO) industry reports average analyst tenure of approximately 2.2 years. That number is not an outlier. It reflects a stable equilibrium that most providers have settled into, because the economics of high turnover are tolerable at scale, even if the consequences for research quality are not.

Here is how the loop works:

Stage 1: The Investment Phase (Months 1–6)

A new analyst joins. They receive onboarding — typically one to two weeks of orientation, followed by on-the-job learning. For the first several months, the analyst is a net cost. Their output requires significant checking. The onshore team invests time explaining context, correcting methodology, and reviewing drafts.

Stage 2: The Productive Phase (Months 6–18)

The analyst has absorbed enough institutional context to produce useful work. They understand the client's house style, know the portfolio, and can handle routine coverage with minimal supervision. This is where the model delivers value. The onshore team's checking burden drops. Turnaround times improve.

Stage 3: The Departure (Months 18–30)

The analyst, now experienced and marketable, leaves. In most offshore research centres, the competitive labour market in cities like Mumbai, Pune, and Bangalore means that an offshore credit analyst or offshore equity researcher with 18 months of client-facing experience can command a meaningful salary increase by moving to a competitor, a bank's captive centre, or a fintech. The provider's retention tools — modest pay increments, title inflation, lateral moves within the firm — rarely match the external market.

Stage 4: The Reset

A new analyst arrives. The cycle restarts. But this time, the institutional memory that the departing analyst carried — the understanding of why a particular counterparty is structured the way it is, the knowledge of what the portfolio manager actually wants when they ask for a "quick screen," the awareness of which data sources the client trusts — is gone. It left with the person.

This loop is self-reinforcing. Each reset erodes the onshore team's confidence in the offshore model. Confidence erosion leads to heavier checking. Heavier checking makes the offshore analyst's role less autonomous and less intellectually rewarding. Less autonomy accelerates the next departure.
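The ramp-time cost of this loop can be sketched with back-of-envelope arithmetic. The sketch below is a toy model, not industry data: the six-month ramp is taken from the stage boundaries described above, the 2.2-year figure is the industry average cited earlier, and the six-year comparison tenure is an illustrative assumption. It also ignores the institutional memory lost at each reset, so it understates the true cost.

```python
# Toy model of the retention-quality loop. The six-month investment
# phase mirrors Stage 1 above; tenure figures are illustrative.
RAMP_MONTHS = 6  # months before a new analyst is net-productive

def productive_share(tenure_months: float, ramp: float = RAMP_MONTHS) -> float:
    """Fraction of an analyst's tenure spent producing net value,
    treating everything before the ramp as pure investment."""
    return max(tenure_months - ramp, 0) / tenure_months

# Industry-average tenure (~2.2 years) vs a hypothetical six-year tenure.
for years in (2.2, 6.0):
    share = productive_share(years * 12)
    print(f"{years:.1f}-year tenure: {share:.0%} of tenure is net-productive")
# → 2.2-year tenure: 77% of tenure is net-productive
# → 6.0-year tenure: 92% of tenure is net-productive
```

Even this crude version shows the asymmetry: the short-tenure engagement pays the six-month ramp tax roughly every two years, and the knowledge reset at each departure compounds the gap well beyond what the percentages alone suggest.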

Why Training Depth Is the First Casualty

With average tenure running at 2.2 years, providers face a rational but damaging calculation: why invest three months training an analyst who will leave within two years?

The industry standard has converged on one to two weeks of onboarding. That is enough time to explain systems access, introduce reporting templates, and provide a high-level overview of the client's coverage universe. It is not enough time to:

  • Walk an analyst through a live coverage initiation from data gathering to publishable draft

  • Teach the difference between a mechanical credit model and one that reflects genuine analytical judgement

  • Develop the ability to explain methodology to a third-party auditor in a way that demonstrates independent understanding rather than rote reproduction

  • Build familiarity with the onshore team's communication norms, decision-making shortcuts, and implicit quality thresholds

The gap between one-week onboarding and deep analytical training is where quality degradation begins. An analyst trained in a week can fill in a template. An analyst trained over three months can exercise judgement about what the template should contain.

The Audit Stress Test

The point at which quality degradation becomes visible — and career-threatening for the onshore manager who championed the model — is often a regulatory or internal audit.

When the third line of defence (internal audit) asks an offshore analyst to explain their projections, the question is not whether the numbers are correct. It is whether the analyst can demonstrate that the projections are proprietary and explainable: that they understand the assumptions, can defend the methodology, and can articulate why their estimate differs from consensus.

An analyst with 18 months of deep, supervised experience can do this. An analyst with six months of post-onboarding exposure, working from a template they did not build, often cannot. The gap is not intelligence or education. It is institutional depth — the kind that only accumulates through sustained engagement, not through onboarding documentation.

For the onshore research head or risk manager, this is where the personal stakes crystallise. If the offshore analyst cannot answer the audit question, the onshore manager's judgement in approving the outsourcing model is what gets scrutinised. The model that was supposed to reduce cost and increase bandwidth has instead created a governance exposure.

The Middle-Manager Filter

There is a second structural factor that compounds quality degradation, and it operates at the communication layer rather than the analyst layer.

Most large offshore providers insert a middle-management tier between the onshore client team and the offshore analysts. This layer exists for operational reasons — it manages scheduling, distributes work, handles HR issues, and provides a single point of contact for the client. But it also filters communication in ways that systematically degrade quality.

When an onshore portfolio manager gives feedback on a draft — "the tone is too cautious on the recovery assumptions" or "I need the comparable analysis to reflect post-restructuring capital structures" — that feedback passes through the middle manager before reaching the analyst. The nuance flattens. The context that would help the analyst understand why the feedback matters, not just what to change, is lost.

Over time, this filtering effect means the analyst never fully absorbs the client's analytical culture. They execute instructions without developing the judgement to anticipate what the client needs. The work stays technically correct but never becomes genuinely integrated.

The Redeployment Problem

There is a third structural factor that accelerates quality degradation, and it is one that clients rarely see until the damage is done.

In mass-market KPO, experienced analysts and capable team managers are scarce commodities. The provider may employ thousands of analysts, but the ones who have genuinely developed institutional depth — who can run a coverage initiation without hand-holding, who understand a client's risk appetite, who can handle an audit question — are a small subset. These people are disproportionately valuable to the provider, not because of what they do on your account, but because of what they can do on a new one.

When the provider wins a new client, or needs to staff a pilot project that will determine whether a prospect converts, the rational move is to deploy its best people. The analyst or manager who has spent two years learning your portfolio, your house style, and your team's communication preferences is exactly the person who can make a new engagement look credible in its first three months. So they get moved — sometimes visibly, sometimes through a gradual rebalancing of workload that only becomes apparent when output quality on your account drops.

This is not a bait and switch in the intentional sense. It is a structural consequence of the mass-market equilibrium. The provider's growth model requires a constant supply of credible, experienced people to deploy on new business. The only source of those people is existing client accounts. The result is a quiet, ongoing transfer of quality from established engagements to new ones — with the established client absorbing the cost.

The Engagement Spiral

The final structural factor is analyst engagement itself — and its gradual erosion is, again, a product of the market equilibrium rather than individual motivation.

Most offshore research analysts in the KPO sector are young, ambitious, and intellectually capable. They want to be challenged. They want to develop professionally. They want to see a career path that justifies staying rather than moving to a competitor, a bank's captive unit, or a fintech.

The mass-market model struggles to provide this. Training budgets are constrained by the turnover economics described above. Pay progression is incremental. The work, filtered through middle management and templated for consistency, often does not stretch the analyst's capabilities beyond their first six months.

The demand-side component

But the spiral has a demand-side component too. When an onshore team loses confidence in offshore output — because of a turnover event, a quality drop, or an audit scare — the natural response is to pull back. The client gives the offshore team less complex work. They reduce direct interaction. They over-check output rather than delegating judgement. Each of these responses is individually rational, but collectively they make the offshore analyst's role less challenging, less autonomous, and less professionally rewarding.

For a young analyst who joined expecting to develop real analytical skill, this is the point where disengagement sets in. They are no longer growing. The work is no longer interesting. The relationship with the client feels transactional rather than collaborative. The recruiter's call offering a 20% pay rise at a competitor becomes harder to ignore.

The onshore team sees the resulting departure as confirmation that offshore analysts are unreliable. The provider sees it as normal attrition. Neither side recognises that the disengagement was a predictable product of the structural conditions both created.

What the Retention Data Is Actually Measuring

The 2.2-year average tenure figure is an industry-wide metric, but it obscures important variation. Some providers retain analysts for four to five years. Others cycle through staff annually. The variation correlates strongly with three structural factors:

Sourcing selectivity

Providers that recruit from India's top 50 MBA programmes (out of approximately 1,300 accredited institutions — a selectivity rate comparable to Russell Group or Ivy League admissions) tend to attract analysts who are intellectually engaged by the work, not just using the role as a stepping stone. Selectivity at entry is a leading indicator of retention.

Training investment

Providers that invest in extended training — measured in months, not days — create analysts who are more capable, more autonomous, and more professionally satisfied. Analysts who exercise genuine judgement in their work are less likely to leave for a marginal salary increase. The paradox is that deeper training, which feels like a cost in year one, is the primary driver of retention in years two through five.

Communication model

Direct communication between offshore analysts and onshore teams — without a middle-management filter — accelerates the development of institutional knowledge and professional identity. An analyst who speaks directly to the portfolio manager, receives feedback in context, and sees their work used in client-facing output develops a sense of professional belonging that templated, filtered work environments cannot replicate.

Providers that score well on all three dimensions report average tenures of five to seven years. The structural difference is not incremental — it represents a fundamentally different economic model, where the provider invests more per analyst but retains them long enough to recover that investment many times over.

The Compounding Effect

The difference between 2.2-year and 6-plus-year average tenure is not a 3x improvement. It is a compounding advantage.

An analyst who stays for six years does not just accumulate six years of knowledge. They become a training resource for new joiners. They develop the ability to handle complex, judgement-heavy work without supervision. They build relationships with onshore team members that survive personnel changes on both sides. They become, in effect, an extension of the client's own team — not a contractor fulfilling a brief, but a colleague who understands the institution's risk appetite, analytical standards, and communication culture.

When that analyst handles an audit question, they do not recite a template. They explain their reasoning — because they have been developing that reasoning, under supervision, for years.

The institutions that experience persistent quality degradation in their offshore research models are not unlucky. They are experiencing the predictable output of a structural retention problem that most providers have no economic incentive to solve.

In stable offshore research models, the analyst owns the analytical work — financial models, monitoring notes, and research drafts — but quality control sits with a senior reviewer who checks methodology, assumptions, and consistency with the desk's analytical standards. Final investment or risk decisions remain entirely with the onshore team. This separation of ownership and oversight is what makes the model safe for the manager who sponsors it.

What Structurally Prevents This Failure

Quality degradation in offshore research is not inevitable. But preventing it requires a provider model that is structurally different from the industry norm in three specific ways:

  • Sourcing that prioritises intellectual calibre over cost minimisation

  • Training that is measured in months rather than days

  • A communication architecture that puts the analyst in direct contact with the onshore team from day one

The question for any institution evaluating or re-evaluating its offshore research model is not "are we getting good work today?" It is: "what is the average tenure of the analysts working on our account, and what structural factors are in place to maintain it?"

If the answer to the first question is below three years, the quality you are experiencing now is likely the peak.

For a framework of what a structurally different model looks like in practice — retention benchmarks, training depth, oversight standards — see our comparison at Upgrade to Frontline.

How to Tell Whether This Is Happening in Your Team

Research quality usually declines gradually, so it can be difficult to recognise the underlying cause. A few indicators tend to reveal whether retention dynamics are affecting an offshore analyst team.

Average analyst tenure

If the analysts working on your account typically stay less than three years, the engagement is likely operating inside the normal industry turnover cycle. The productive phase described above may be shorter than you assume.

Repeated onboarding cycles

If new analysts regularly require extensive explanation of portfolio context, internal terminology, or house style, institutional knowledge is not accumulating inside the team. Each cycle resets the clock.

Heavy checking by the onshore desk

When research output requires consistent rewriting or detailed verification, the offshore team may not yet have enough experience with the portfolio to work autonomously. If this persists beyond the first six months, the issue is structural.

Limited direct analyst interaction

If most communication passes through project managers rather than between analysts and desk professionals, the learning and integration that builds genuine analytical depth will be significantly slower — or may not happen at all.

None of these indicators alone proves that the model is failing. But when several appear together, the cause is almost always structural rather than individual.