AnswerLens

AnswerLens is a CLI-first AI visibility auditor for product websites. CI for AI discoverability.


Versioned distribution

Release notes should read like a second public front door, not a changelog graveyard.

This page is compiled from release metadata so that the public version line stays machine-readable, easy to index, and useful to first-time visitors.

Apr 21, 2026

v0.3.2

AnswerLens is a CLI-first AI visibility auditor for product websites. CI for AI discoverability.

## Start here

- Open the live demo report: https://yscjrh.github.io/ai-visibility-auditor/examples/static-good/index.html
- Run the 60-second fixture demo: https://github.com/YSCJRH/ai-visibility-auditor#run-the-60-secon...

Open GitHub release

Apr 21, 2026

v0.3.1

AnswerLens is a CLI-first AI visibility auditor for product websites. CI for AI discoverability.

## Start here

- Open the live demo report: https://yscjrh.github.io/ai-visibility-auditor/examples/static-good/index.html
- Run the 60-second fixture demo: https://github.com/YSCJRH/ai-visibility-auditor#run-the-60-secon...

Open GitHub release

Apr 15, 2026

v0.3.0

AnswerLens is a CLI-first AI visibility auditor for product websites. CI for AI discoverability.

## Start here

- Open the live demo report: https://yscjrh.github.io/ai-visibility-auditor/examples/static-good/index.html
- Run the 60-second fixture demo: https://github.com/YSCJRH/ai-visibility-auditor#run-the-60-secon...

Open GitHub release

Apr 15, 2026

v0.2.3

AnswerLens is a CLI-first AI visibility auditor for product websites. CI for AI discoverability.

## What ships

- Field-level schema-text consistency and evidence density checks
- Internal link context and discoverability rules for proof pages
- Manual rank import with competitive position score (CPS)
- Repeated-samp...

Open GitHub release

Apr 11, 2026

v0.2.0

## What ships

AnswerLens `v0.2.0` is a CLI-first, report-driven AI visibility auditor for product websites. This release ships:

- `audit` for deterministic AI-readiness checks against live sites or local fixtures
- `eval` for prompt-pack benchmarking with OpenAI and Perplexity adapters
- `manual-import` for scoring...
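Given the "CI for AI discoverability" framing, the `audit` subcommand above would typically run as a pipeline step. A minimal GitHub Actions sketch, assuming the CLI installs from the repository and is invoked as `answerlens` (the binary name, install step, and invocation shape are assumptions, not documented in these notes):

```yaml
# Hypothetical CI job for AI-discoverability checks.
# The `answerlens` binary name, install method, and arguments are assumptions.
name: ai-discoverability-audit
on: [push]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm install -g answerlens    # install method assumed
      - run: answerlens audit ./fixtures  # auditing a local fixture, per the release note
```

Running against local fixtures (rather than a live site) keeps the job deterministic, which matches the "deterministic AI-readiness checks" claim in the v0.2.0 summary.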

Open GitHub release

Apr 10, 2026

v0.1.0-alpha.1

Initial public alpha for AnswerLens. What is supported now:

- CLI-first AI-readiness audit for product sites and fixtures
- OpenAI-backed experimental eval with normalized citations and raw payload persistence
- Markdown, JSON, and static HTML reports
- Brief generation for FAQ, compare, and use-case gaps

What this...

Open GitHub release