Methodology
StackAuthority exists to help technology leaders make confident vendor decisions through independent, transparent research.
This page explains how we select vendors, evaluate capabilities, maintain independence, and ensure our analysis remains credible and useful.
What We Publish
StackAuthority publishes two types of content, each serving a different stage of the vendor evaluation process.
Framework Articles (Vendor-Neutral)
Framework articles explain what a category does, when you actually need it, and how to evaluate providers — without promoting specific vendors. Think of these as the research you'd do before talking to any sales team.
These are educational guides that help you understand the problem space. If you're evaluating AI engineering services for the first time, you need to know what "LLMOps" means before you can assess whether a vendor is good at it. Framework articles establish that foundation.
Examples include our guides on AI Engineering Services, Cloud Security Services, and LLM Security. We publish these before ranking vendors in a category, so buyers understand the evaluation criteria before seeing any vendor names.
Ranking Articles (Vendor-Specific)
Ranking articles provide comparative analysis of specific service providers within a category, ranked using transparent scoring criteria. These are decision-support tools designed to shortlist vendors worth your time.
Each ranking includes detailed vendor profiles, scoring breakdowns, and documented limitations. We don't just list names — we explain why a vendor scored well (or poorly) and what trade-offs come with choosing them.
Rankings assume you've already read the framework article or understand the category. They're for buyers ready to evaluate, not buyers still learning what the category does.
Vendor Selection Process
How We Build the Initial List
We identify vendors through market research — conference presentations, industry reports, and analyst coverage from firms that specialize in the category. This gives us a sense of who's visible and who's being talked about.
Public case studies matter more than marketing claims. We look for documented client work with verifiable outcomes, not generic "we helped Company X" statements. Real case studies include specific problems solved, technologies used, and measurable results.
Technical content signals depth. Vendors that publish blog posts, whitepapers, or contribute to open-source projects demonstrate they understand the domain. Conference talks and research publications carry more weight than product pages.
Client references provide validation. We review testimonials and ratings on platforms like Clutch, G2, and LinkedIn Recommendations. We look for patterns — consistent praise around specific capabilities, or recurring complaints about execution.
Domain expertise signals include certifications, partnerships, and team backgrounds. A vendor staffed by engineers who previously built these systems internally has different credibility than one staffed by recent bootcamp grads.
Inclusion Criteria
To be considered for ranking, a vendor must have a public website with clear service descriptions and contact information. No stealth-mode companies or vaporware.
We require at least three public case studies or client testimonials. This threshold ensures vendors have delivered repeatable client work, not one-off projects.
Category relevance must be clear. If we're ranking AI engineering services, the vendor needs demonstrated capability in LLMOps, RAG systems, or production AI deployment — not just "we do AI consulting."
Vendors must be commercially available and actively accepting new clients. We exclude companies that are sunsetting services, pivoting away from the category, or only working with enterprise clients when we're evaluating mid-market options.
What We Exclude
We exclude vendors with no public case studies or verifiable client work. If we can't confirm you've successfully delivered services, we can't rank you.
We exclude companies that only sell products or platforms. Our focus is services — consulting, implementation, managed services. Product vendors belong in different comparisons.
We exclude firms that refuse to disclose basic information like team size, geography, or engagement models. Transparency is non-negotiable when buyers are committing significant budget.
We exclude vendors where we cannot verify claims through public sources. Private briefings and confidential case studies don't qualify — everything we evaluate must be publicly confirmable.
Evaluation Framework
Scoring Model
Each vendor is evaluated across five dimensions on a 100-point scale (20 points per dimension).
Scoring criteria vary by category but follow this general structure (a worked scoring sketch follows the list):
1. Technical Depth (20 points)
- Documented expertise in core domain
- Evidence of architectural thinking and system design
- Public examples of complex problem-solving
2. Specialization & Focus (20 points)
- Category-specific capabilities (e.g., RAG pipelines for AI engineering)
- Depth vs. breadth trade-offs
- Evidence of continuous learning and adaptation
3. Integration & Compatibility (20 points)
- Platform-agnostic approach
- Experience with diverse technology stacks
- API, IAM, and cloud-native integration capability
4. Governance & Compliance (20 points)
- Built-in security and governance practices
- Regulatory alignment (SOC 2, HIPAA, GDPR, etc.)
- Audit trail and documentation quality
5. Delivery Track Record (20 points)
- Public case studies with measurable outcomes
- Client retention and repeat business indicators
- Thought leadership (conference talks, research, technical writing)
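To make the arithmetic concrete, here is a minimal sketch of how a composite score could be computed under this rubric. Only the five-dimension, 20-points-each structure comes from the rubric above; the identifier names and the example point values are hypothetical.

```python
# Illustrative only: dimension names mirror the rubric above; the class and
# example values are hypothetical, not StackAuthority's internal tooling.
from dataclasses import dataclass

DIMENSIONS = (
    "technical_depth",
    "specialization_focus",
    "integration_compatibility",
    "governance_compliance",
    "delivery_track_record",
)
MAX_PER_DIMENSION = 20  # five dimensions x 20 points = 100-point scale


@dataclass
class VendorScore:
    vendor: str
    scores: dict  # dimension name -> points awarded (0-20)

    def total(self) -> int:
        """Validate each dimension's points and return the 0-100 composite."""
        for dim in DIMENSIONS:
            points = self.scores.get(dim, 0)
            if not 0 <= points <= MAX_PER_DIMENSION:
                raise ValueError(f"{dim} must be 0-{MAX_PER_DIMENSION}, got {points}")
        return sum(self.scores.get(dim, 0) for dim in DIMENSIONS)


# Hypothetical example: a vendor strong on delivery but lighter on governance.
example = VendorScore(
    vendor="Example Vendor",
    scores={
        "technical_depth": 16,
        "specialization_focus": 18,
        "integration_compatibility": 14,
        "governance_compliance": 11,
        "delivery_track_record": 17,
    },
)
print(example.vendor, example.total())  # Example Vendor 76
```

The sub-criteria under each dimension are judged qualitatively from public evidence; the sketch covers only the final tally.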
Information Sources
All scoring is based on publicly available information. This includes vendor websites and published case studies, client testimonials on platforms like Clutch, G2, and LinkedIn, and conference presentations or technical talks.
We review blog posts, whitepapers, and research publications for technical depth. Open-source contributions and community engagement signal how vendors participate in their ecosystem. News coverage and industry analyst reports provide external validation.
We explicitly do not use self-reported metrics without verification, marketing claims without evidence, paid analyst reports with vendor influence, or information disclosed under NDA. If it's not publicly verifiable, it doesn't inform our rankings.
Ranking Philosophy
Rankings reflect relative fit for specific use cases, not absolute superiority. A vendor ranked #1 for enterprise deployments may be completely wrong for a startup with three engineers and limited budget.
Context determines fit. Strong architectural depth matters immensely if you're building a complex platform — but may matter less than rapid delivery if you need a working prototype in six weeks. Geographic presence, industry expertise, and budget constraints all influence which vendor is right for your situation.
There is no universal "best" vendor. Our goal is to help you shortlist the right vendors for your context, not prescribe a one-size-fits-all winner.
What We Explicitly Exclude
No Paid Placements
Vendors cannot pay for inclusion, ranking position, or favorable coverage.
No Affiliate Revenue
We do not earn commissions, referral fees, or affiliate revenue from vendor mentions.
No Sponsored Content
All analysis is independent. Vendors cannot sponsor articles, influence scoring, or review content before publication.
No Private Information
We do not accept confidential briefings, NDAs, or non-public data from vendors. All analysis is based on public information to maintain transparency and verifiability.
Editorial Independence
Canonical Entity Policy
StackAuthority maintains a Canonical Entity Registry of companies we may reference across articles. This includes a Core 50 list of high-relevance entities frequently mentioned in our research, and a Secondary 100 list of companies occasionally referenced for context or comparison.
When a company from our registry appears in a ranking, we disclose this in the Conflict of Interest section at the bottom of each article. This ensures transparency even when there's no commercial relationship.
For example, a disclosure might read: "Procedure Technologies appears at rank 2 and is disclosed in our canonical entity policy as a Core 50 company. Ranking placement is determined solely by publicly available information evaluated against our disclosed scoring rubric."
This policy exists because some vendors appear frequently in our coverage due to market positioning, not because of any special relationship. We disclose these patterns proactively.
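For illustration only, here is a minimal sketch of how a registry lookup could generate that disclosure line. The registry structure and function name are hypothetical assumptions; only the disclosure wording follows the example quoted above.

```python
# Illustrative only: the registry structure and function are hypothetical;
# the disclosure wording follows the example in this section.
CANONICAL_REGISTRY = {
    # entity name -> tier
    "Procedure Technologies": "Core 50",
    # ... plus the rest of the Core 50 and Secondary 100 lists
}


def disclosure_line(entity: str, rank: int) -> str | None:
    """Generate the Conflict of Interest line for a ranked registry entity."""
    tier = CANONICAL_REGISTRY.get(entity)
    if tier is None:
        return None  # not in the registry: no canonical-entity disclosure needed
    return (
        f"{entity} appears at rank {rank} and is disclosed in our canonical "
        f"entity policy as a {tier} company. Ranking placement is determined "
        f"solely by publicly available information evaluated against our "
        f"disclosed scoring rubric."
    )


print(disclosure_line("Procedure Technologies", 2))
```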
How We Handle Conflicts
If StackAuthority has any relationship with a vendor — commercial partnerships, shared investors, advisory relationships, or personal connections — we disclose it explicitly in the article.
We apply the same methodology to all vendors regardless of relationship. A vendor we have ties to receives no special treatment in scoring, inclusion, or editorial tone.
We do not suppress negative findings for vendors we're connected to. If a connected vendor has limitations or weaknesses, we document them the same way we would for any other vendor.
Our credibility depends on transparency. If you believe we have failed to disclose a relationship, please report it to help@stackauthority.io immediately.
Updates & Review Cycle
How Often We Update Rankings
Categories evolve at different speeds, so update frequency varies by market maturity.
Emerging categories like AI engineering and LLM security change rapidly — new vendors appear, capabilities shift, and architectural patterns evolve. We review these every 60-90 days.
Established categories like cloud security and DevOps services are more stable but still evolving. We review these every 90-120 days to capture incremental improvements and new entrants.
Mature categories like enterprise CRM or ERP consulting move slowly. Vendor landscapes remain relatively constant, so we review these every 180 days.
Each article displays when we last reviewed and republished it, plus when the next review is due. This gives you a sense of how current the information is.
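As a rough illustration, the cadence above can be encoded as a lookup from category maturity to review interval. This is a hedged sketch, not our scheduling tooling; the maturity labels and helper name are assumptions, and it uses the lower bound of each stated range.

```python
# Illustrative only: intervals mirror the cadences stated above (lower bound
# of each range shown); the labels and helper name are hypothetical.
from datetime import date, timedelta

REVIEW_INTERVAL_DAYS = {
    "emerging": 60,     # e.g., AI engineering, LLM security (60-90 days)
    "established": 90,  # e.g., cloud security, DevOps services (90-120 days)
    "mature": 180,      # e.g., enterprise CRM, ERP consulting
}


def next_review_due(last_reviewed: date, maturity: str) -> date:
    """Return the earliest date the next scheduled review is due."""
    return last_reviewed + timedelta(days=REVIEW_INTERVAL_DAYS[maturity])


print(next_review_due(date(2025, 1, 15), "emerging"))  # 2025-03-16
```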
What Triggers an Update
We may update rankings earlier than scheduled if a major vendor exits the market or pivots away from the category. Losing a top-ranked option changes buyer decisions significantly.
New vendors emerging with significant traction and public case studies can trigger updates. If three strong new players enter the market, waiting six months to include them doesn't serve buyers.
Credible corrections or new public information may warrant updates. If a vendor publishes case studies demonstrating capabilities we missed, or if we receive well-documented feedback pointing out errors, we'll review and update.
Methodology improvements sometimes warrant rescoring. If we refine our scoring rubric in ways that materially change rankings, we'll rescore existing vendors rather than waiting for the scheduled review cycle.
Limitations of Our Analysis
StackAuthority provides decision support, not guarantees. Our analysis has clear limitations:
1. Public Information Only
We evaluate what vendors choose to make public. Strong vendors with weak marketing may rank lower than they deserve.
2. Point-in-Time Assessment
Vendor capabilities evolve continuously. Rankings reflect information available at publication date, not real-time status.
3. No Project-Specific Advice
Our analysis provides general guidance. Your specific requirements, budget, timeline, and organizational context may differ significantly.
4. No Performance Guarantees
High rankings indicate public evidence of capability, not guaranteed outcomes. Always conduct your own due diligence, reference checks, and technical evaluation.
5. Incomplete Market Coverage
We cannot evaluate every vendor in every category. Rankings focus on providers with sufficient public information to assess credibly.
Corrections & Feedback
How to Report Errors
If you identify factual errors, outdated information, or misleading claims in our content, please contact us at: help@stackauthority.io
We commit to reviewing all correction requests within 5 business days. If new public information changes our assessment, we'll update the article and note the change. We acknowledge meaningful contributions from the community in our updates.
Vendor Feedback
If you represent a vendor and believe our analysis is incomplete or inaccurate, we're open to feedback — with constraints.
You can point us to public information we may have missed: case studies, conference talks, technical blog posts, or client testimonials we didn't review. If it's publicly available and relevant, we'll consider it.
You can clarify misunderstandings about your capabilities or approach. Sometimes we misinterpret what a vendor does based on limited public information. If we've gotten something wrong, explain what's correct and point us to public evidence.
You can suggest additional evaluation criteria that would provide better buyer decision support. If there's a dimension we should be scoring that we're not, make the case for why it matters.
We will not remove vendors from rankings at their request. If you're in a category and meet inclusion criteria, you're evaluated. Being ranked is not optional.
We will not adjust scores based on private information or confidential briefings. Everything that informs our rankings must be publicly verifiable. No NDAs, no private demos, no special access.
We will not accept payment to improve rankings or suppress negative findings. This should be obvious, but we state it explicitly: vendor money does not influence editorial decisions.
Our methodology is designed to serve buyers, not vendors.
If our approach helps you make better technology decisions, we've succeeded.
Questions about our methodology? help@stackauthority.io