First, name the behavior. Call it "agentwashing" when a product marketed as agentic is really just an LLM, some orchestration, and a handful of scripts. The language you use internally will shape how seriously people treat the issue.
Second, demand evidence instead of demos. Polished demos are easy to fake, but architecture diagrams, evaluation methods, failure modes, and documented limitations are harder to counterfeit. If a vendor can’t clearly explain how their agents reason, plan, act, and recover, that should raise suspicion.
Third, tie vendor claims directly to measurable outcomes and capabilities. That means contracts and success criteria should be framed around quantifiable improvements in specific workflows, explicit autonomy levels, error rates, and governance boundaries, rather than vague goals like “autonomous AI.”
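To make the third point concrete, the acceptance criteria in a contract can be expressed as executable checks over logged agent runs. The sketch below is illustrative only: the field names, thresholds, and run-log shape are assumptions, not a standard, and any real contract would define its own metrics.

```python
from dataclasses import dataclass

@dataclass
class AcceptanceCriteria:
    """Hypothetical, contract-level success criteria for one workflow."""
    workflow: str            # the specific workflow being automated
    max_error_rate: float    # max fraction of runs ending in a wrong outcome
    min_success_rate: float  # min fraction completed without human rescue
    max_autonomy_level: int  # highest action class allowed unsupervised

def evaluate(criteria: AcceptanceCriteria, runs: list[dict]) -> dict:
    """Score a batch of logged runs against the criteria.

    Each run is assumed to look like:
      {"succeeded": bool, "errored": bool, "autonomy_level": int}
    """
    total = len(runs)
    errors = sum(r["errored"] for r in runs)
    successes = sum(r["succeeded"] for r in runs)
    worst_autonomy = max(r["autonomy_level"] for r in runs)
    report = {
        "workflow": criteria.workflow,
        "error_rate": errors / total,
        "success_rate": successes / total,
        "autonomy_ok": worst_autonomy <= criteria.max_autonomy_level,
    }
    # Pass only if every contractual bound holds.
    report["passed"] = (
        report["error_rate"] <= criteria.max_error_rate
        and report["success_rate"] >= criteria.min_success_rate
        and report["autonomy_ok"]
    )
    return report

criteria = AcceptanceCriteria(
    workflow="invoice triage",      # example workflow, not a real deployment
    max_error_rate=0.02,
    min_success_rate=0.95,
    max_autonomy_level=2,
)
runs = [
    {"succeeded": True, "errored": False, "autonomy_level": 1},
    {"succeeded": True, "errored": False, "autonomy_level": 2},
    {"succeeded": False, "errored": True, "autonomy_level": 2},
]
print(evaluate(criteria, runs))
```

The point is not the particular metrics but the shape of the agreement: "autonomous AI" is unfalsifiable, while an error-rate ceiling and an autonomy cap over a named workflow are things a buyer can actually verify.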
