Compare commits


25 Commits
main ... master

Author SHA1 Message Date
seaislee1209
7267c0bce5 Merge dev: v0.19.5~v0.19.6 (root-cause fixes: generation-page slow-scroll jump-to-bottom / CI retry false-green)
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 2m48s
- v0.19.5 (85aa024): generation page still jumped to the bottom when slowly scrolling up — root cause: two stacked anchoring layers.
  CSS: overflow-anchor: none on .contentArea disables the browser's automatic anchoring +
  a loadMoreInFlightRef guard in handleScroll prevents rAF accumulation
- v0.19.6 (3f85825): the 6 retry loops in CI deploy.yaml now correctly exit 1 on failure.
  Eradicates the "for ... && break; done" pattern that swallowed errors and produced a "false green check".
  (Because of this, v0.19.5 sat for 2 days without auto-deploying to the test server; it only went live via a manual patch.)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 22:59:12 +08:00
seaislee1209
3f858257ea fix: v0.19.6 CI deploy.yaml retry loops now correctly exit 1 on failure
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 5m4s
Root cause: the 6 retry loops in deploy.yaml used the `for ... do command && break; done`
pattern. A bash for loop's own exit code is always 0 as long as the loop itself terminates
normally, even when every attempt failed. CI saw the step exit 0 and marked it green.

Actual incident: after v0.19.5 (85aa024) was pushed to dev, Gitea Actions showed a green
check, but no corresponding ReplicaSet was created on the test-server K8s — the web pod was
still running v0.19.4. The K8s ReplicaSet history showed no new RS after 4-24 12:12,
meaning the deploy step's kubectl apply never committed the new image tag to etcd
(or some intermediate step failed silently and was swallowed). The image had already been
pushed to SWR; the failure was in the subsequent deploy step, and CI never noticed.

Fix: all 6 retry loops now carry an `ok=0/ok=1/break` flag, guarded after the loop by
`[ $ok -eq 1 ] || exit 1`, so a genuine failure makes the step exit non-zero and CI turn red:
  - backend build (3 attempts)
  - backend push (3 attempts)
  - web build (3 attempts)
  - web push (3 attempts)
  - kubectl download (3 attempts)
  - deploy to K3s (5 attempts, including kubectl apply / rollout restart)

From now on, a failed deployment actually shows red in Gitea Actions instead of a deceptive
"false green". The existing Report-failure-to-Log-Center step (if: failure()) is also
triggered, so Feishu / log-center receive an alert.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 17:40:42 +08:00
seaislee1209
85aa0249b9 fix: v0.19.5 generation page still jumps to bottom when slowly scrolling up — double-layered anchor root cause
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 14m10s
v0.19.4 only fixed the scrollToBottom mis-trigger in the useEffect; it did not fix the
anchor accumulation in handleScroll. User testing confirmed: slowly scrolling the mouse
wheel or slowly dragging the scrollbar upward still jumps to the very bottom, while
quickly dragging to the top does not reproduce it.

Root cause (two anchoring layers stacked)
1. The browser's automatic scroll anchoring (default overflow-anchor: auto): when content
   is inserted at the head of the scroll container, the browser automatically does
   scrollTop += diff to keep the visual position
2. The .then in handleScroll also manually does el.scrollTop += diff
=> double anchoring: total displacement = 2 * diff, pushing scrollTop to max (the bottom)

Why fast dragging doesn't reproduce it but slow scrolling does:
the browser temporarily disables scroll anchoring while the user is actively scrolling.
After a fast drag to the top, loadMore completes immediately; the browser treats the
just-released drag as still scrolling, skips automatic anchoring, and only our single
manual adjustment runs — no stacking. With slow operation there are 100~300 ms idle gaps
between scroll events, so the browser considers scrolling finished; automatic anchoring
kicks in on top of our manual anchor = doubled.

Fix (two lines of defense)
1. CSS: add overflow-anchor: none to .contentArea, fully disabling the browser's automatic
   anchoring so the code manages it alone — this is the root-cause fix
2. handleScroll: add a loadMoreInFlightRef re-entrancy flag so that under slow scrolling,
   repeated entries into the if branch schedule only one anchor adjustment; the flag is
   cleared once the rAF completes — a defensive fallback against rAF accumulation under
   extreme timing

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-25 14:17:01 +08:00
seaislee1209
965d155daf Merge dev: v0.19.1~v0.19.4 (asset-group delete API / prompt @ → 图片N / 1080P unit-price display / history loading no longer jumps to bottom)
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 12m30s
- v0.19.1 (f2dc8d4): asset-group deletion switched to a single DeleteAssetGroup API call +
  idempotent local cleanup (NotFound.group_id is treated as already deleted; fixes local
  orphans left behind when groups are hand-deleted in the Volcano console)
- v0.19.2 (13440f2): @asset-names in prompts converted to「图片N/视频N/音频N」(Image N /
  Video N / Audio N) per the Volcano convention, fixing the intermittent character-swap
  issue in multi-character generation (the Volcano model only understands positional
  references; it cannot read @filenames / asset ids)
- v0.19.3 (ecdb9cb): UI bug where the two 1080P unit-price inputs on the admin system
  settings page showed empty; _settings_dict (GET) omitted the fields. Billing was always
  correct; this is a pure UI-layer fix
- v0.19.4 (10994df): loading history by scrolling up on the generation page no longer
  jumps to the bottom; the useEffect now compares the tail task id instead of tasks.length,
  distinguishing "prepend history at the head" from "push new task at the tail"
- 17fc3e5: all task views / lists / CSV exports gained a "resolution" field (completing
  the 1080P visibility work left over from v0.19.0)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-25 00:32:05 +08:00
seaislee1209
10994df952 fix: v0.19.4 generation page no longer jumps to bottom when loading history while scrolling up
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 4m15s
Symptom: when the user scrolls up through history and loadMore prepends old tasks to the
head of the tasks array, the page automatically jumps to the very bottom, interrupting browsing.

Root cause: the useEffect in VideoGenerationPage.tsx depended on tasks.length and called
scrollTo(scrollHeight) whenever the count increased. The intent was "scroll to bottom when
a new task is pushed to the tail", but it could not distinguish a head prepend from a tail
push, so loadMore also triggered the scroll. The anchor in handleScroll (scrollTop += diff)
would normally preserve the visual position, but the useEffect's smooth scrollTo
preempted/overrode it.

Fix: compare whether the tail task's id changed (prevLastIdRef) instead of the length
- new task pushed to the tail → tail id changes → scroll to bottom
- history loaded at the head → tail id unchanged → position preserved
- polling updates task properties (e.g. generation finished) → array contents change but tail id doesn't → no disturbance
- a (non-tail) task deleted → tail id unchanged → no scroll
- only "something new at the tail" scrolls to bottom, matching user intuition
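The decision itself reduces to a pure function (a hypothetical distillation; the real effect lives in VideoGenerationPage.tsx and also performs the scrollTo call):

```typescript
interface Task { id: string }

// Scroll only when the tail id changed, i.e. something new was appended.
// Prepends, in-place updates, and non-tail deletions all keep the tail id.
function shouldScrollToBottom(prevLastId: string | undefined, tasks: Task[]): boolean {
  const lastId = tasks.at(-1)?.id;
  return lastId !== undefined && lastId !== prevLastId;
}
```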

Verification: all 71 vitest failures are preexisting since v0.18.x (phase2/phase3
path-resolution issues); stash before/after comparison is identical — this change
introduces zero new regressions.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-24 20:08:09 +08:00
seaislee1209
ecdb9cb471 fix: v0.19.3 admin settings GET now returns the two 1080P unit-price fields
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 11m45s
When v0.19.0 added 1080P, QuotaConfig gained the base_token_price_1080p and
base_token_price_1080p_video fields; the serializer (PUT) and the billing logic
(_get_token_price) both handled them, but _settings_dict (GET) missed two lines,
so the two 1080P unit-price inputs on the admin settings page showed blank.

Actual impact
- DB values are correct (51 / 31); billing reads the DB directly via _get_token_price, so billing was always correct
- the frontend SettingsPage fetchSettings overwrites state with setSettings(data);
  GET response missing the fields -> state becomes undefined -> inputs show empty
- when an admin clicks save: undefined is omitted by JSON.stringify -> PUT body lacks
  the two fields -> absent from serializer validated_data -> DB unchanged
- so it was "safe by coincidence", but the risk: if an admin types a number into the blank
  input and then clears it, Number("") = 0 would overwrite the DB and zero out the price
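The two JS coercion facts behind the "safe by coincidence" analysis can be checked directly (a plain sketch, not project code; the field name is illustrative):

```typescript
// 1. JSON.stringify drops undefined-valued keys entirely, so a PUT built
//    from state with missing fields silently omits them.
const body = JSON.stringify({ base_token_price_1080p: undefined, other: 1 });
// body === '{"other":1}'

// 2. An emptied <input> yields "", and Number("") coerces to 0 — not NaN —
//    which is why clearing the blank input could zero out the stored price.
const cleared = Number("");        // 0
const missing = Number(undefined); // NaN — a truly absent value behaves differently
```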

Fix
- backend/apps/generation/views.py _settings_dict() gains two lines returning
  base_token_price_1080p / base_token_price_1080p_video
- after the GET, the frontend state directly receives 51 / 31 and the inputs display
  them — no longer relying on "coincidence"

Regression test (backend/tests/test_1080p_api.py)
- new TestAdminSettingsResponse.test_get_returns_all_token_price_fields asserts that
  GET /admin/settings returns all 6 token_price fields
- the failure message states explicitly: "a missing field makes the frontend input show
  empty", to prevent the same omission in the future

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-24 19:09:55 +08:00
seaislee1209
13440f2709 feat: v0.19.2 @asset-names in prompts converted to「图片N/视频N/音频N」(Image N / Video N / Audio N) per the Volcano convention
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 5m58s
The Volcano Seedance model only understands "asset type + ordinal" references (official
docs, FAQ Q3); it cannot read filenames / asset ids / URL-like strings at all and can only
guess who is who from the order images appear in the content array — hence the intermittent
"characters swapped" issue users observed (typical task cgt-20260422163517-4k8x6).

Changes
- backend/apps/generation/views.py:
  - new _format_prompt_for_ark(prompt, label_placeholders) helper; uses str.replace to
    avoid regex-metacharacter crashes, replacing labels in descending length order to
    prevent substring swallowing
  - the references loop in video_generate_view maintains three independent counters
    image_n/video_n/audio_n plus a label_to_placeholder map
  - key invariant: at any moment, counter == the number of *_url items of that type
    already pushed into content_items; the legacy group path still advances the counter
    but registers no label and logs a WARNING, avoiding ordinal misalignment
  - api_prompt is built before calling create_task and sent to Volcano; DB.prompt keeps
    the user's original text (with @xxx.jpg) so reEdit can rebuild the thumbnail tags
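The replace-longest-first trick can be sketched framework-free (a hypothetical TypeScript analogue of the Python helper; function and map names follow the commit message):

```typescript
// Replace @labels with positional placeholders, longest label first so
// "@cat.jpg" never swallows part of "@cat.jpg2"; split/join performs a
// literal replacement, sidestepping regex metacharacters the same way the
// backend's str.replace does.
function formatPromptForArk(prompt: string, labelToPlaceholder: Record<string, string>): string {
  const labels = Object.keys(labelToPlaceholder)
    .filter(l => l.length > 0)              // skip empty labels
    .sort((a, b) => b.length - a.length);   // descending length: longest first
  let out = prompt;
  for (const label of labels) {
    out = out.split(label).join(labelToPlaceholder[label]);
  }
  return out;
}
```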

Test coverage: 14 cases (all green against airlabs-test MySQL)
- 9 unit: basic replacement / independent counters per type / repeated @ / substring
  conflict / regex metacharacters / empty mapping / label never @-mentioned / Chinese
  punctuation / empty label skipped
- 5 integration: local path normal replacement / DB keeps the original text / legacy group
  path unchanged + WARN / mixed local+group counter alignment (key regression) / image and
  audio counted independently

Compatibility
- reEdit: DB keeps the original text; PromptInput.rebuildMentionSpans can still rebuild
  spans via the @label regex, thumbnails render normally
- regenerate: goes through the same POST /api/v1/video/generate, so it passes through the
  conversion again
- Celery: only queries, never resends — unaffected

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-23 20:52:32 +08:00
seaislee1209
f2dc8d4713 feat: v0.19.1 asset-group deletion switched to a single DeleteAssetGroup call + idempotent local cleanup
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 11m59s
Previously, DELETE /api/v1/assets/groups/<id> looped over DeleteAsset calls before deleting
the local record. Volcano has now opened up DeleteAssetGroup (the docs state it cascades to
every Asset in the group), so we switched to a single call: atomic, fast, no partial success.

Changes
- backend/utils/assets_client.py:
  - register DeleteAssetGroup in ApiInfo
  - new delete_asset_group(group_id)
- backend/apps/generation/views.py:
  - the DELETE branch of asset_group_detail_view now makes a single delete_asset_group call
  - idempotency guard: when Volcano returns NotFound.group_id, still clear the local
    record. This fixes the scenario where a user hand-deletes an asset group in the
    Volcano console, leaving an orphan in the local DB — one click of "delete asset group"
    in the frontend now clears the local leftover
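The idempotent-delete shape, sketched hypothetically in TypeScript (the real code is Python in views.py; the NotFound.group_id error code comes from the commit, everything else is illustrative):

```typescript
class RemoteError extends Error {
  constructor(public code: string) { super(code); }
}

// Delete remotely, but treat "already gone" as success so local cleanup
// always runs — one click clears orphans left by out-of-band deletion.
// Any other remote error still surfaces.
async function deleteGroup(
  remoteDelete: (id: string) => Promise<void>,
  localDelete: (id: string) => void,
  groupId: string,
): Promise<void> {
  try {
    await remoteDelete(groupId);
  } catch (e) {
    if (!(e instanceof RemoteError) || e.code !== "NotFound.group_id") throw e;
  }
  localDelete(groupId); // runs whether or not the remote side still existed
}
```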

Tests (airlabs-test)
- assets_client, 4 cases PASS: create → delete → verify gone → deleting again returns
  NotFound.group_id → a purely fake id likewise returns NotFound.group_id
- view layer, 2 scenarios PASS:
  - A: exists on both Volcano and locally → both cleared
  - B: already hand-deleted on the Volcano side, still present locally → local also cleared

Docs cleanup
- docs/API文档/about-Asset-素材组相关/ gains 8 latest official Volcano Asset API docs
  (CreateAsset/Group, List*, Get*, Update*, Delete*); the old "usage guide" moved into
  that directory for archival

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-22 21:09:19 +08:00
seaislee1209
17fc3e5652 feat: add a "resolution" field to all task views / lists / exports — completing 1080P visibility
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 5m18s
After v0.19.0 shipped 1080P, several views showed only the aspect "ratio" without the
"resolution", so users and finance saw cost differences without knowing they came from
different resolutions. This commit fills the gap:

- VideoDetailModal video-detail info bar: mode · model · duration · ratio · [resolution] · tokens · cost
- RecordDetailModal consumption-record detail dialog: "resolution" field added to basic info
- RecordsPage super-admin consumption-record CSV export: "resolution" column added after the ratio column
- TeamRecordsPage team-admin consumption-record CSV export: likewise
- ProfilePage personal-center record list: small resolution tag next to the cost on the right (only when present)
- types/index.ts: AdminRecord gains a resolution?: Resolution field

The backend API already returned resolution (the 5 hand-written serializations from v0.19.0
cover L1751 admin_records / L1815 team_records / L2704 profile_records / L2837/2919 content
assets), so the frontend only needed to receive and display it.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 22:08:06 +08:00
seaislee1209
fc61650092 Merge dev into master — v0.19.0 + v0.18.3 released to production
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 2m43s
v0.19.0 (39667ff): 1080P resolution support — full frontend+backend + strict billing accuracy + 47 tests
v0.18.3 (dafdc89): friendly copyright-error messages + Jimeng-style sequential renaming on image deletion

See 版本管理.md and the individual commit messages for details. Fully verified on the test server:
- 47 automated tests (backend 28 + frontend 14 + local E2E 5)
- test-server tudou account E2E 8/8 passed
- manual testing by the team's content-generation staff passed
2026-04-17 20:27:46 +08:00
seaislee1209
27bfa689ce test: add test-server E2E — 1080P resolution support verified live (8/8 passed)
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 4m29s
Verified for real against airflow-studio.test.airlabs.art with the team-admin account tudou:
- Sidebar "remaining generations today" copy + no diamond icon
- Toolbar defaults to 720P
- AirDrama can switch to 1080P
- Fast dropdown greyed out under 1080P (Fast+1080P unreachable in the UI)
- 1080P dropdown greyed out under Fast (the reverse direction)
- ProfilePage warning copy free of the old term "today's quota"
- API rejects the Fast+1080P combination (400 invalid_resolution)
- API rejects adaptive ratio (400)

All passing; also included: resolution-1080p.spec.ts (local version, admin account, 5/5
passed) and the backend tests (23 unit + 5 integration, all green).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 19:14:28 +08:00
seaislee1209
6b22e1fa3f fix: user-facing copy changed from "额度" (quota) to the concrete unit "次数" (count) — eliminating points-concept confusion
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 4m5s
- Sidebar bottom-left: diamond icon removed (so users don't carry over the "points"
  concept from Jimeng/Doubao) + data switched from daily_seconds (a seconds-pool leftover)
  to daily_generation_limit (a count); copy "剩余额度" → "今日剩余次数" ("remaining
  generations today", spelled out in full — users shouldn't have to guess);
  number font size enlarged 14→18, tabular-nums for stable layout
- ProfilePage warning banner: "今日额度已使用 X%" → "今日生成次数已用 X%";
  "今日额度已用完" → "今日生成次数已用完"
- generation.ts error mapping: "额度不足,请联系管理员" →
  "今日生成次数或团队余额不足,请联系管理员" (listing both possible causes)

The seconds pool (daily_seconds_limit) is a concept left over from before v0.10.0 moved
billing to counts + money. This change replaces every user-visible "额度" (quota) with the
explicit units "generation count / balance", so users stop reading it as Jimeng/Doubao
"points" and contacting support about it.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 19:07:02 +08:00
seaislee1209
39667ff19c feat: v0.19.0 1080P resolution support — full frontend+backend + strict billing accuracy + 47 tests
Volcano Seedance 2.0 added 1080P support on 2026-04-16. This commit implements the frontend
UI, backend validation/billing, and the database migration, strictly following three principles:
1. No fallbacks / no silent degradation — the Fast+1080P combination is defended at five
   layers (UI/store/serializer/view/pricing); if any layer is penetrated it fails loud,
   never quietly billing at 720P
2. Money math is exact — the frontend estimate formula matches the backend estimate_tokens
   exactly: `(input duration + output duration) × width × height × fps / 1024`; actual
   billing uses the total_tokens returned by Volcano × the official unit price; the
   estimator maintains no minimum-token correction table
3. No hiding bugs — no `or '720p'` / `|| '720p'` fallbacks; strict types; exceptions surface
## Backend (7 spots + 1 migration)

- models.py: QuotaConfig gains base_token_price_1080p (51) / base_token_price_1080p_video (31);
  GenerationRecord.resolution gains a RESOLUTION_CHOICES constraint + default='720p'
- migrations/0020: includes a RunPython data migration backfilling historical resolution='' → '720p'
- utils/billing.py:
  * RESOLUTION_MAP gains six 1080P aspect-ratio entries (21:9 is 2206×946, not the seedance 1.0 value)
  * get_resolution drops the tier default; an invalid combination raises KeyError instead of silently degrading
  * estimate_tokens is the pure official formula, with an input_video_duration parameter added (formula now complete)
- utils/airdrama_client.py: create_task gains a required resolution parameter (no default)
- apps/generation/serializers.py:
  * VideoGenerateSerializer gains a resolution ChoiceField
  * aspect_ratio becomes a ChoiceField that explicitly rejects adaptive
  * SystemSettingsSerializer gains the 2 1080P unit prices
- apps/generation/views.py:
  * _get_token_price gains a required resolution parameter; Fast+1080P raises ValueError
  * _sum_video_duration sums video-reference durations
  * video_generate_view reads resolution, rejects the Fast+1080P combination with 400, and
    passes it to get_resolution/estimate_tokens/_get_token_price/create_task/
    GenerationRecord.resolution (removing the hard-coded '720p' at L450)
  * _settle_payment picks the unit price by record.resolution (1080P settles at the 1080P price)
  * _serialize_task + the 5 hand-written serializations gain a resolution field (no `or '720p'`)
- apps/accounts/views.py: the team endpoint returns token_price_1080p/_video

## Frontend (10 spots)

- types/index.ts: Resolution type; GenerationTask/BackendTask/Team/QuotaConfig/AssetVideo
  gain the field (all required, no optional)
- store/inputBar.ts: resolution state; setModel/setResolution intercept the Fast+1080P
  combination in both directions with a guiding toast, never silently degrading
- store/generation.ts: addTask/backendToFrontend/reEdit/regenerate carry resolution through
  the whole chain; mapErrorMessage changed to "今日生成次数或团队余额不足"
- components/Toolbar.tsx:
  * resolution-picker Dropdown added (between ratio and duration)
  * modelItems/resolutionItems disabled in both directions (1080P greyed under Fast / Fast greyed under 1080P)
  * estimatedTokens aligned with the backend formula (including input video duration + assetMentions video durations)
  * estimatedCost picks the unit price by resolution (Fast→fast_*, 1080p→1080p_*, otherwise→base)
  * tooltip states explicitly that "actual cost follows the token count returned by the Volcano API"
- components/Dropdown.tsx: disabled prop support added
- components/VideoDetailModal.tsx: re-edit restores resolution
- components/GenerationCard.tsx: dynamically displays task.resolution.toUpperCase()
- pages/SettingsPage.tsx: 2 1080P unit-price inputs added (own group)
- pages/AdminAssetsPage.tsx / TeamAssetsPage.tsx: || '720p' fallback removed
- lib/api.ts: videoApi.generate's resolution parameter is required
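The two-way interception in the store reduces to a pure validator plus guarded setters (a hypothetical sketch; the real store is Zustand-based and also fires the toast):

```typescript
type Model = "airdrama" | "airdrama-fast";
type Resolution = "480p" | "720p" | "1080p";
interface InputState { model: Model; resolution: Resolution }

// The one rule defended at five layers: Fast never pairs with 1080P.
function isValidCombo(model: Model, resolution: Resolution): boolean {
  return !(model === "airdrama-fast" && resolution === "1080p");
}

// Guarded setters: an invalid transition is refused (state unchanged),
// never silently "fixed" to some other value — the caller shows a toast.
function setModel(state: InputState, model: Model): InputState {
  return isValidCombo(model, state.resolution) ? { ...state, model } : state;
}
function setResolution(state: InputState, resolution: Resolution): InputState {
  return isValidCombo(state.model, resolution) ? { ...state, resolution } : state;
}
```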

## Tests (47 cases)

### Backend (28)
- tests/test_1080p_billing.py (23): RESOLUTION_MAP pixels, the estimate_tokens formula
  (with/without input video, no minimum-token correction), the six _get_token_price
  combinations, Fast+1080P raising, calculate_cost matching the official examples 4.97 / 12.39 元
- tests/test_1080p_api.py (5): video_generate_view rejects Fast+1080P (400)
  + rejects adaptive + rejects invalid resolution + default-value compatibility + valid combination passes

### Frontend (19)
- test/unit/resolution1080p.test.ts (14): store state, two-way interception (switching to
  Fast under 1080P is blocked with model unchanged, and vice versa), official pixel
  contract tests, price-example alignment (720P 4.97 / 1080P 12.39)
- test/e2e/resolution-1080p.spec.ts (5): real-browser verification of the 720P default,
  the two-way Dropdown greying, and the tooltip stating Volcano's count is authoritative

## Alignment with the official docs

- Parameters: resolution (lowercase 480p/720p/1080p), ratio, duration, generate_audio
- Pixels: from docs/API文档/创建视频生成任务API.md, the Seedance 2.0 & 2.0 fast columns
- Unit prices: from docs/API文档/seedance模型价格.md (46/28/51/31/37/22)
- Fast not supporting 1080P: from docs/API文档/Seedance 2.0 1080P.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 19:06:45 +08:00
seaislee1209
624e12ae46 docs: v0.18.3 docs cleanup + new Volcano API docs + changelog
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 5m38s
- 4 new official Volcano API docs added (Seedance 2.0 1080P / seedance model pricing /
  seedance 2.0 series tutorial / create-video-generation-task API)
- 6 outdated docs archived to docs/archive/ (old Seedance API beta /
  old Assets API beta / celery polling fix / design-review / prd / test-report)
- new docs/todo/ directory (prompt AI-optimization feature backlog)
- changelog.md gains the v0.18.3 entry

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 18:03:49 +08:00
seaislee1209
dafdc8983f fix: v0.18.3 friendly copyright-error messages + Jimeng-style sequential renaming on image deletion
Bug 1: friendly message for copyright-restriction errors
- ERROR_MESSAGES gains an OutputVideoSensitiveContentDetected.PolicyViolation mapping
- copyright blocks triggered by well-known IPs such as Marvel no longer show the raw English error

Bug 2: sequential renaming of same-type references after image deletion (Jimeng behavior)
- inputBar.ts::removeReference rewritten: after deletion, remaining references of the same type are renumbered consecutively 1/2/3 in order
- DOMParser synchronously updates the @mention span textContent for the matching data-ref-id in editorHtml
- the thumbnail strip and the prompt bar refresh together, avoiding naming collisions like two "图片2"
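The renumbering itself reduces to a per-type re-count (a hypothetical sketch of the idea; the real removeReference also rewrites the @mention spans via DOMParser):

```typescript
type RefType = "image" | "video" | "audio";
interface Ref { id: string; type: RefType; label: string }

const zhName: Record<RefType, string> = { image: "图片", video: "视频", audio: "音频" };

// After removing one reference, relabel the survivors of each type
// consecutively (图片1, 图片2, …) so no gap or duplicate label remains.
function removeAndRenumber(refs: Ref[], removeId: string): Ref[] {
  const counters: Record<RefType, number> = { image: 0, video: 0, audio: 0 };
  return refs
    .filter(r => r.id !== removeId)
    .map(r => ({ ...r, label: `${zhName[r.type]}${++counters[r.type]}` }));
}
```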

Verification
- 11 Vitest unit tests covering image/video/audio deletion, empty editorHtml, no @mentions,
  rapid consecutive deletions, and other edge cases
- 3 Playwright E2E real-browser checks: upload 3 images → delete the middle one → upload again → no numbering conflict

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 18:03:36 +08:00
seaislee1209
2281c64ee8 fix: audio cannot be the sole reference asset — frontend validation + toast
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 6m0s
The Seedance API does not accept "audio only" or "text + audio" input; audio must be paired with an image or video.
- canSubmit() now checks both references and assetMentions
- clicking the disabled button in Toolbar pops a toast explaining why

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 14:10:39 +08:00
zyc
41115faa16 add md
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 5m19s
2026-04-13 20:47:43 +08:00
seaislee1209
0b770340c8 fix: asset-library references unviewable on the assets pages + re-edit asset leakage
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 4m52s
1. AdminAssetsPage/TeamAssetsPage: asset:// protocol URLs now use thumb_url for thumbnails
2. generation.ts reEdit/regenerate: filter by isAssetRef so asset-library references don't leak into the references array
3. PromptInput extractText: syncs the assetMentions store in real time; deleting an @tag no longer leaves stale data

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 18:33:08 +08:00
zyc
177a9c7dec feat: HTTP→HTTPS auto-redirect — Traefik Middleware + CI/CD deployment completion
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 3m36s
- new redirect-https-middleware.yaml (Traefik 301 permanent redirect)
- middleware annotation added to ingress.yaml
- deploy.yaml now also kubectl-applies cert-manager-issuer and redirect-middleware

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 11:38:58 +08:00
zyc
a6a3928091 perf: kubectl 4 s timeout + 5 retries, so K3s intranet jitter can't hang the deployment
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 4m9s
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 11:29:24 +08:00
zyc
ab1b00f94a feat: HTTP auto-redirect to HTTPS — Traefik Middleware + Ingress annotation
Some checks failed
Build and Deploy / build-and-deploy (push) Has been cancelled
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 11:17:02 +08:00
seaislee1209
5972f45784 fix: broken thumbnails for asset-library references + pollStatus cross-project asset protection
Some checks failed
Build and Deploy / build-and-deploy (push) Has been cancelled
- MentionTag: onError fallback shows a video/image placeholder icon when a thumbnail fails to load
- createMentionSpan/VideoDetailModal: img onError hides broken images
- buildReferenceSnapshots: asset-library references use thumb_url as previewUrl
- isAssetRef flag prevents video thumbnails from being rendered in a <video> tag and prevents duplicates on re-edit
- pollStatus: already-active assets skip the remote query, preventing cross-project assets from being wrongly deleted

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 11:11:12 +08:00
seaislee1209
db1bbfa1d4 Merge branch 'dev' of https://gitea.airlabs.art/zyc/video-shuoshan into dev
Some checks failed
Build and Deploy / build-and-deploy (push) Has been cancelled
2026-04-04 22:18:54 +08:00
seaislee1209
4b2dd9ef5e fix: audio ♫ glyph leaking into the prompt text — render via CSS ::before instead
createMentionSpan previously set the audio ♫ via textContent, so extractText()'s
el.textContent pulled it into the plain prompt text, leaving a stray ♫ character
after renderPromptWithMentions matched.

Now rendered via CSS ::before content, which does not participate in textContent,
so the prompt no longer contains extra ♫.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-04 22:17:07 +08:00
zyc
3bc8b78507 perf: docker cleanup keeps base-image cache, prunes only dangling images
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 4m4s
2026-04-04 21:54:11 +08:00
63 changed files with 8116 additions and 166 deletions


@@ -49,45 +49,55 @@ jobs:
         id: build_backend
         run: |
           set -o pipefail
+          ok=0
           for attempt in 1 2 3; do
             echo "Build backend attempt $attempt/3..."
             DOCKER_BUILDKIT=0 docker build \
               --tag ${{ env.CR_SERVER_ACTIVE }}/${{ env.CR_ORG }}/video-backend:${{ env.IMAGE_TAG }} \
               --tag ${{ env.CR_SERVER_ACTIVE }}/${{ env.CR_ORG }}/video-backend:latest \
-              ./backend 2>&1 | tee /tmp/build.log && break
+              ./backend 2>&1 | tee /tmp/build.log && { ok=1; break; }
             echo "Attempt $attempt failed, retrying in 10s..." && sleep 10
           done
+          [ $ok -eq 1 ] || { echo "ERROR: backend build failed after 3 attempts"; exit 1; }
+          ok=0
           for attempt in 1 2 3; do
             docker push ${{ env.CR_SERVER_ACTIVE }}/${{ env.CR_ORG }}/video-backend:${{ env.IMAGE_TAG }} && \
-            docker push ${{ env.CR_SERVER_ACTIVE }}/${{ env.CR_ORG }}/video-backend:latest && break
+            docker push ${{ env.CR_SERVER_ACTIVE }}/${{ env.CR_ORG }}/video-backend:latest && { ok=1; break; }
             echo "Push attempt $attempt failed, retrying in 10s..." && sleep 10
           done
+          [ $ok -eq 1 ] || { echo "ERROR: backend push failed after 3 attempts"; exit 1; }
       - name: Build and Push Web
         id: build_web
         run: |
           set -o pipefail
+          ok=0
           for attempt in 1 2 3; do
             echo "Build web attempt $attempt/3..."
             DOCKER_BUILDKIT=0 docker build \
               --tag ${{ env.CR_SERVER_ACTIVE }}/${{ env.CR_ORG }}/video-web:${{ env.IMAGE_TAG }} \
               --tag ${{ env.CR_SERVER_ACTIVE }}/${{ env.CR_ORG }}/video-web:latest \
-              ./web 2>&1 | tee -a /tmp/build.log && break
+              ./web 2>&1 | tee -a /tmp/build.log && { ok=1; break; }
             echo "Attempt $attempt failed, retrying in 10s..." && sleep 10
           done
+          [ $ok -eq 1 ] || { echo "ERROR: web build failed after 3 attempts"; exit 1; }
+          ok=0
           for attempt in 1 2 3; do
             docker push ${{ env.CR_SERVER_ACTIVE }}/${{ env.CR_ORG }}/video-web:${{ env.IMAGE_TAG }} && \
-            docker push ${{ env.CR_SERVER_ACTIVE }}/${{ env.CR_ORG }}/video-web:latest && break
+            docker push ${{ env.CR_SERVER_ACTIVE }}/${{ env.CR_ORG }}/video-web:latest && { ok=1; break; }
             echo "Push attempt $attempt failed, retrying in 10s..." && sleep 10
           done
+          [ $ok -eq 1 ] || { echo "ERROR: web push failed after 3 attempts"; exit 1; }
       - name: Setup Kubectl
         run: |
           if ! command -v kubectl &>/dev/null; then
+            ok=0
            for attempt in 1 2 3; do
-              curl -LO "https://files.m.daocloud.io/dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl" && break
+              curl -LO "https://files.m.daocloud.io/dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl" && { ok=1; break; }
              echo "Download attempt $attempt failed, retrying in 5s..." && sleep 5
            done
+            [ $ok -eq 1 ] || { echo "ERROR: kubectl download failed after 3 attempts"; exit 1; }
            chmod +x kubectl && mv kubectl /usr/bin/kubectl
          fi
          kubectl version --client
@@ -133,42 +143,48 @@ jobs:
           sed -i "s|redis://zyc:Zyc188208@redis-shzlsczo52dft8mia.redis.ivolces.com:6379/0|${{ env.REDIS_URL }}|g" k8s/celery-deployment.yaml
           # All kubectl operations with retry (the K3s intranet connection can be flaky)
-          for attempt in 1 2 3; do
-            echo "Deploy attempt $attempt/3..."
+          export KUBECTL_TIMEOUT="--request-timeout=4s"
+          ok=0
+          for attempt in 1 2 3 4 5; do
+            echo "Deploy attempt $attempt/5..."
             {
               # Create/update image pull secret for CR
-              kubectl create secret docker-registry cr-pull-secret \
+              kubectl $KUBECTL_TIMEOUT create secret docker-registry cr-pull-secret \
                 --docker-server="${{ env.CR_SERVER_ACTIVE }}" \
                 --docker-username="${{ env.CR_USERNAME_ACTIVE }}" \
                 --docker-password="${{ env.CR_PASSWORD_ACTIVE }}" \
-                --dry-run=client -o yaml | kubectl apply -f -
+                --dry-run=client -o yaml | kubectl $KUBECTL_TIMEOUT apply -f -
               # Create/update secrets (business secrets; DB credentials are already in the yaml)
-              kubectl create secret generic video-backend-secrets \
+              kubectl $KUBECTL_TIMEOUT create secret generic video-backend-secrets \
                 --from-literal=ARK_API_KEY='${{ secrets.ARK_API_KEY }}' \
                 --from-literal=TOS_ACCESS_KEY='${{ secrets.TOS_ACCESS_KEY }}' \
                 --from-literal=TOS_SECRET_KEY='${{ secrets.TOS_SECRET_KEY }}' \
                 --from-literal=DJANGO_SECRET_KEY='${{ secrets.DJANGO_SECRET_KEY }}' \
                 --from-literal=ALIYUN_SMS_ACCESS_KEY='${{ secrets.ALIYUN_SMS_ACCESS_KEY }}' \
                 --from-literal=ALIYUN_SMS_ACCESS_SECRET='${{ secrets.ALIYUN_SMS_ACCESS_SECRET }}' \
-                --dry-run=client -o yaml | kubectl apply -f -
+                --dry-run=client -o yaml | kubectl $KUBECTL_TIMEOUT apply -f -
               # Apply manifests
-              kubectl apply -f k8s/backend-deployment.yaml
-              kubectl apply -f k8s/celery-deployment.yaml
-              kubectl apply -f k8s/web-deployment.yaml
-              kubectl apply -f k8s/ingress.yaml
+              kubectl $KUBECTL_TIMEOUT apply -f k8s/cert-manager-issuer.yaml
+              kubectl $KUBECTL_TIMEOUT apply -f k8s/redirect-https-middleware.yaml
+              kubectl $KUBECTL_TIMEOUT apply -f k8s/backend-deployment.yaml
+              kubectl $KUBECTL_TIMEOUT apply -f k8s/celery-deployment.yaml
+              kubectl $KUBECTL_TIMEOUT apply -f k8s/web-deployment.yaml
+              kubectl $KUBECTL_TIMEOUT apply -f k8s/ingress.yaml
               # Preserve real client IP
-              kubectl patch svc traefik -n kube-system -p '{"spec":{"externalTrafficPolicy":"Local"}}' 2>/dev/null || true
-              kubectl rollout restart deployment/video-backend
-              kubectl rollout restart deployment/celery-worker
-              kubectl rollout restart deployment/video-web
-            } 2>&1 | tee /tmp/deploy.log && break
-            echo "Attempt $attempt failed, retrying in 10s..."
-            sleep 10
+              kubectl $KUBECTL_TIMEOUT patch svc traefik -n kube-system -p '{"spec":{"externalTrafficPolicy":"Local"}}' 2>/dev/null || true
+              kubectl $KUBECTL_TIMEOUT rollout restart deployment/video-backend
+              kubectl $KUBECTL_TIMEOUT rollout restart deployment/celery-worker
+              kubectl $KUBECTL_TIMEOUT rollout restart deployment/video-web
+            } 2>&1 | tee /tmp/deploy.log && { ok=1; break; }
+            echo "Attempt $attempt failed, retrying in 30s..."
+            sleep 30
           done
+          [ $ok -eq 1 ] || { echo "ERROR: deploy to K3s failed after 5 attempts — check /tmp/deploy.log"; exit 1; }
       # ===== Log Center: failure reporting =====
       - name: Report failure to Log Center
@@ -234,7 +250,7 @@ jobs:
         if: always()
         run: |
           docker container prune -f
-          docker image prune -a -f
+          docker image prune -f
           docker builder prune -a -f
           echo "Disk usage after cleanup:"
           df -h / | tail -1


@@ -244,6 +244,34 @@ jimeng-clone/
 | `/admin/assets` | AdminAssetsPage | Admin | Content assets (team→member→video hierarchy) |
 | `/team/assets` | TeamAssetsPage | TeamAdmin | Team content assets (member→video hierarchy) |
+
+## AI Skills Reference
+
+This project guides AI development through three layers: **skill** (reusable methodology) + **memory** (personal discipline) + **hook** (system-level enforcement). Load according to the editing scenario:
+
+| Editing scenario | Applicable skill / memory / hook |
+|---|---|
+| `backend/utils/seedance_client.py`, any Seedance API integration | **Skill** `seedance-api-integration` — parameter construction, token billing, async polling, error-code mapping, asset:// references, white-labeling, 5 real incidents |
+| Adding fields to Django models (`backend/apps/*/models.py`) or changing create() calls | **Memory** `feedback_mysql_default` / `feedback_mysql_explicit_fields` — under MySQL strict mode, create() must pass every CharField value explicitly (one production-500 incident on 2026-03-19) |
+| Any `git push` | **Hook** `~/.claude/hooks/pre-git-push.sh` intercepts; after explicit user authorization, retry with `ALLOW_PUSH=1 git push` |
+| User-visible labels / buttons / error messages | **Memory** `feedback_write_full_labels` (spell out units/types/meaning) + `feedback_user_facing_docs_plain_language` (no programmer jargon) + `feedback_seconds_unit` (seconds only, never minutes/hours) |
+| Any user-facing string with a model name / error message / help copy | **Memory** `feedback_no_seedance_branding` — the white-label contract forbids "Seedance"; use "AirDrama" / "AirDrama Fast" |
+| Inline-edit buttons (save / cancel) | **Memory** `feedback_inline_edit_style` — `whiteSpace: 'nowrap'` is mandatory; historically squeezed onto two lines several times |
+| Adding new fields or error codes against Volcano / Doubao / Seedance third-party APIs | **Memory** `feedback_follow_official_api` — follow the official docs strictly, never invent fields / error codes (history: once fabricated a `.PolicyViolation` subtype) |
+| Tests touching the generation page / `@` mentions / asset library | Log in with the `tudou` (team-admin) account; `admin` has no team by default and cannot enter the generation page |
+| After editing code, before claiming "done" | **Memory** `feedback_verify_before_deliver` + `feedback_verify_thoroughly` — run the full user path yourself to verify key changes; don't make the user take the blame |
+| Commercial-grade code requirements (concurrency, failure, resource reuse) | **Memory** `feedback_commercial_grade` — no "acceptable for now / optimize later"; either do it properly or flag the risk explicitly |
+| Screenshots or discussion found before changing a design / the UI | **Memory** `feedback_no_rush_changes` + `feedback_simple_first` — propose first (simplest option first) and wait for the user's "go ahead" before acting |
+
+**Discipline tiers:**
+
+- **Skill** — large cross-project reusable methodology (`~/.claude/skills/`)
+- **Memory** — collaboration discipline / personal preferences (auto-loaded into session context, `~/.claude/projects/c--Airlabs-Project/memory/`)
+- **Hook** — system-level hard interception (`~/.claude/settings.json` + `~/.claude/hooks/`)
+- Project-specific architecture conventions live directly in this CLAUDE.md (Project Architecture / API Endpoints / Database Models sections)
+
+**When adding or removing mapping entries:** cross-project rules go into memory, project-specific rules into the matching CLAUDE.md section, and only cross-project technical methodology becomes a skill (see `feedback_no_skill_bloat`).
+
+---
+
 ## Incremental Development Guide
 
 ### How to Add Features to This Project


@@ -241,6 +241,8 @@ def me_view(request):
             'token_price_video': float(config.base_token_price_video) * markup_mult,
             'token_price_fast': float(config.base_token_price_fast) * markup_mult,
             'token_price_fast_video': float(config.base_token_price_fast_video) * markup_mult,
+            'token_price_1080p': float(config.base_token_price_1080p) * markup_mult,
+            'token_price_1080p_video': float(config.base_token_price_1080p_video) * markup_mult,
             'is_active': team.is_active,
         }
         data['team_disabled'] = not team.is_active


@@ -0,0 +1,41 @@
# Generated by Django 4.2.29 on 2026-04-17 18:09
from django.db import migrations, models


def backfill_empty_resolution(apps, schema_editor):
    """Backfill historical resolution='' records to '720p' (legacy rows predating the choices constraint)."""
    GenerationRecord = apps.get_model('generation', 'GenerationRecord')
    GenerationRecord.objects.filter(resolution='').update(resolution='720p')


def reverse_backfill(apps, schema_editor):
    """On rollback, do not restore empty strings (the historical rows can no longer be identified precisely)."""
    pass


class Migration(migrations.Migration):

    dependencies = [
        ('generation', '0019_duration_nullable'),
    ]

    operations = [
        migrations.AddField(
            model_name='quotaconfig',
            name='base_token_price_1080p',
            field=models.DecimalField(decimal_places=2, default=51, max_digits=10, verbose_name='1080P单价-不含视频(元/百万tokens)'),
        ),
        migrations.AddField(
            model_name='quotaconfig',
            name='base_token_price_1080p_video',
            field=models.DecimalField(decimal_places=2, default=31, max_digits=10, verbose_name='1080P单价-含视频(元/百万tokens)'),
        ),
        # Backfill historical empty values before tightening the choices
        # constraint, avoiding an IntegrityError under MySQL strict mode
        migrations.RunPython(backfill_empty_resolution, reverse_backfill),
        migrations.AlterField(
            model_name='generationrecord',
            name='resolution',
            field=models.CharField(choices=[('480p', '480P'), ('720p', '720P'), ('1080p', '1080P')], default='720p', max_length=10, verbose_name='分辨率'),
        ),
    ]


@@ -19,6 +19,11 @@ class GenerationRecord(models.Model):
         ('completed', '已完成'),
         ('failed', '失败'),
     ]
+    RESOLUTION_CHOICES = [
+        ('480p', '480P'),
+        ('720p', '720P'),
+        ('1080p', '1080P'),
+    ]
     user = models.ForeignKey(
         settings.AUTH_USER_MODEL,
@@ -39,7 +44,7 @@ class GenerationRecord(models.Model):
     cost_amount = models.DecimalField(max_digits=12, decimal_places=2, default=0, verbose_name='用户费用(元)')
     base_cost_amount = models.DecimalField(max_digits=12, decimal_places=2, default=0, verbose_name='平台成本(元)')
     frozen_amount = models.DecimalField(max_digits=12, decimal_places=2, default=0, verbose_name='冻结金额(元)')
-    resolution = models.CharField(max_length=10, blank=True, default='', verbose_name='分辨率')
+    resolution = models.CharField(max_length=10, choices=RESOLUTION_CHOICES, default='720p', verbose_name='分辨率')
     status = models.CharField(max_length=20, choices=STATUS_CHOICES, default='queued', verbose_name='状态')
     result_url = models.CharField(max_length=1000, blank=True, default='', verbose_name='生成结果URL')
     thumbnail_url = models.CharField(max_length=1000, blank=True, default='', verbose_name='视频缩略图URL')
@@ -97,6 +102,8 @@ class QuotaConfig(models.Model):
     base_token_price_video = models.DecimalField(max_digits=10, decimal_places=2, default=28, verbose_name='基础token单价-含视频(元/百万tokens)')
     base_token_price_fast = models.DecimalField(max_digits=10, decimal_places=2, default=37, verbose_name='Fast单价-不含视频(元/百万tokens)')
     base_token_price_fast_video = models.DecimalField(max_digits=10, decimal_places=2, default=22, verbose_name='Fast单价-含视频(元/百万tokens)')
+    base_token_price_1080p = models.DecimalField(max_digits=10, decimal_places=2, default=51, verbose_name='1080P单价-不含视频(元/百万tokens)')
+    base_token_price_1080p_video = models.DecimalField(max_digits=10, decimal_places=2, default=31, verbose_name='1080P单价-含视频(元/百万tokens)')
     updated_at = models.DateTimeField(auto_now=True)

     class Meta:

View File

@@ -5,8 +5,11 @@ class VideoGenerateSerializer(serializers.Serializer):
     prompt = serializers.CharField(required=False, allow_blank=True, default='')
     mode = serializers.ChoiceField(choices=['universal', 'keyframe'])
     model = serializers.ChoiceField(choices=['seedance_2.0', 'seedance_2.0_fast'])
-    aspect_ratio = serializers.CharField(max_length=10)
+    # Explicit whitelist; rejects 'adaptive' (Volcengine's default) — estimation/billing needs a concrete width and height.
+    aspect_ratio = serializers.ChoiceField(choices=['16:9', '9:16', '4:3', '1:1', '3:4', '21:9'])
     duration = serializers.IntegerField()
+    # 1080p is only supported by Seedance 2.0, not Fast — video_generate_view validates the model/resolution combination.
+    resolution = serializers.ChoiceField(choices=['480p', '720p', '1080p'], required=False, default='720p')
     references = serializers.ListField(child=serializers.DictField(), required=False, default=list)
@@ -40,6 +43,8 @@ class SystemSettingsSerializer(serializers.Serializer):
     base_token_price_video = serializers.DecimalField(max_digits=10, decimal_places=2, min_value=0, required=False)
     base_token_price_fast = serializers.DecimalField(max_digits=10, decimal_places=2, min_value=0, required=False)
     base_token_price_fast_video = serializers.DecimalField(max_digits=10, decimal_places=2, min_value=0, required=False)
+    base_token_price_1080p = serializers.DecimalField(max_digits=10, decimal_places=2, min_value=0, required=False)
+    base_token_price_1080p_video = serializers.DecimalField(max_digits=10, decimal_places=2, min_value=0, required=False)
     announcement = serializers.CharField(required=False, allow_blank=True, default='')
     announcement_enabled = serializers.BooleanField(required=False, default=False)
     max_desktop_sessions = serializers.IntegerField(min_value=1, required=False, default=1)
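The two new serializer fields encode the whole compatibility story: strict whitelists for new values, a default for old clients. A minimal pure-Python sketch of that behavior (the name `validate_generate_params` and the dict shapes are illustrative, not the DRF serializer itself):

```python
ASPECT_RATIOS = {'16:9', '9:16', '4:3', '1:1', '3:4', '21:9'}
RESOLUTIONS = {'480p', '720p', '1080p'}

def validate_generate_params(data):
    """Mimic the serializer's choices + default rules; return (cleaned, errors)."""
    errors = {}
    ar = data.get('aspect_ratio')
    if ar not in ASPECT_RATIOS:
        # 'adaptive' (Volcengine's default) and any other stray value is rejected
        errors['aspect_ratio'] = f'"{ar}" is not a valid choice.'
    res = data.get('resolution', '720p')  # old clients omit the field -> default 720p
    if res not in RESOLUTIONS:
        errors['resolution'] = f'"{res}" is not a valid choice.'
    return {'aspect_ratio': ar, 'resolution': res}, errors

cleaned, errors = validate_generate_params({'aspect_ratio': '16:9'})
print(cleaned['resolution'], errors)   # 720p {}
cleaned, errors = validate_generate_params({'aspect_ratio': 'adaptive', 'resolution': '4K'})
print(sorted(errors))                  # ['aspect_ratio', 'resolution']
```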

View File

@@ -55,10 +55,59 @@ def _has_video_reference(references):
     return any(ref.get('type') == 'video' for ref in references)

-def _get_token_price(config, model, has_video_ref):
-    """根据模型和是否有视频参考选择单价。"""
+def _sum_video_duration(references):
+    """Sum the durations of all video-type references — the input duration fed into token estimation."""
+    if not references:
+        return 0.0
+    total = 0.0
+    for ref in references:
+        if ref.get('type') == 'video':
+            try:
+                total += float(ref.get('duration') or 0)
+            except (ValueError, TypeError):
+                continue
+    return total
+
+def _format_prompt_for_ark(prompt, label_placeholders):
+    """Replace every @label in the prompt with the 「图片N/视频N/音频N」 form Volcengine understands.
+
+    The Volcengine Seedance models only resolve "asset type + index" references (official docs, FAQ Q3).
+    File names / asset ids / URLs are opaque strings to them, aligned by position probabilistically,
+    which shows up as "swapped characters". This function silently rewrites the prompt just before the
+    Volcengine call; the user's original prompt stays in the DB so reEdit can refill labels with thumbnails.
+
+    label_placeholders: [(label, placeholder), ...]. The caller must sort by label length descending,
+    so that "碧" is not replaced before "碧碧" (substring swallowing).
+    Uses str.replace instead of re.sub to avoid crashing when a label contains regex metacharacters
+    (e.g. "@[test].png").
+    """
+    result = prompt
+    for label, placeholder in label_placeholders:
+        if not label:
+            continue
+        result = result.replace(f'@{label}', placeholder)
+    return result
+
+def _get_token_price(config, model, has_video_ref, resolution):
+    """Select the unit price by model, video-reference presence, and resolution.
+
+    Constraints match the official docs:
+    - Seedance 2.0 Fast does not support 1080p. The UI already blocks this combination, and
+      video_generate_view rejects it too (VideoGenerateSerializer validates the fields). If it still
+      reaches here, the frontend constraint failed or the API was called directly — fail loud; never
+      silently fall back to the 720p price (that would cheat the user).
+    - 1080p on Seedance 2.0 uses its own prices (51/31).
+    - 480p and 720p share the same price.
+    """
+    if model == 'seedance_2.0_fast' and resolution == '1080p':
+        raise ValueError(
+            'Seedance 2.0 Fast 不支持 1080p — 前端应阻止此组合,不应进到计价函数'
+        )
     if model == 'seedance_2.0_fast':
         return config.base_token_price_fast_video if has_video_ref else config.base_token_price_fast
+    if resolution == '1080p':
+        return config.base_token_price_1080p_video if has_video_ref else config.base_token_price_1080p
     return config.base_token_price_video if has_video_ref else config.base_token_price
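As a sanity check, the price-selection rules in this hunk can be exercised outside Django. A minimal sketch with a stand-in config (`SimpleNamespace` replaces `QuotaConfig`; the prices are the defaults from the migration above):

```python
from types import SimpleNamespace

# Stand-in for QuotaConfig, using the default prices added by the migration.
config = SimpleNamespace(
    base_token_price=46, base_token_price_video=28,
    base_token_price_fast=37, base_token_price_fast_video=22,
    base_token_price_1080p=51, base_token_price_1080p_video=31,
)

def get_token_price(config, model, has_video_ref, resolution):
    """Mirror of _get_token_price: fail loud on Fast+1080p; 1080p has dedicated prices."""
    if model == 'seedance_2.0_fast' and resolution == '1080p':
        raise ValueError('Seedance 2.0 Fast does not support 1080p')
    if model == 'seedance_2.0_fast':
        return config.base_token_price_fast_video if has_video_ref else config.base_token_price_fast
    if resolution == '1080p':
        return config.base_token_price_1080p_video if has_video_ref else config.base_token_price_1080p
    return config.base_token_price_video if has_video_ref else config.base_token_price

print(get_token_price(config, 'seedance_2.0', False, '1080p'))  # 51
print(get_token_price(config, 'seedance_2.0', True, '480p'))    # 28 (480p/720p share prices)
```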
@@ -175,15 +224,26 @@ def video_generate_view(request):
     mode = serializer.validated_data['mode']
     model = serializer.validated_data['model']
     aspect_ratio = serializer.validated_data['aspect_ratio']
+    # The serializer sets default='720p' with a choices constraint, so validated_data always holds a legal value.
+    resolution = serializer.validated_data['resolution']
     search_mode = request.data.get('search_mode', 'off')
     seed = _safe_int(request.data.get('seed', -1), -1)
+    # 1080P is only supported by Seedance 2.0; Fast does not support it.
+    if resolution == '1080p' and model == 'seedance_2.0_fast':
+        return Response({
+            'error': 'invalid_resolution',
+            'message': '1080P 仅支持 AirDrama 模型,AirDrama Fast 不支持 1080P,请切换模型或选择 720P',
+        }, status=status.HTTP_400_BAD_REQUEST)

     # ── Estimate tokens and cost ──
     config = QuotaConfig.objects.get_or_create(pk=1)[0]
-    w, h = get_resolution(aspect_ratio)
-    estimated_tokens = estimate_tokens(w, h, duration)
-    has_video_ref = _has_video_reference(request.data.get('references', []))
-    token_price = _get_token_price(config, model, has_video_ref)
+    references = request.data.get('references', [])
+    w, h = get_resolution(aspect_ratio, resolution)
+    has_video_ref = _has_video_reference(references)
+    input_video_dur = _sum_video_duration(references) if has_video_ref else 0
+    estimated_tokens = estimate_tokens(w, h, duration, input_video_duration=input_video_dur)
+    token_price = _get_token_price(config, model, has_video_ref, resolution)
     estimated_cost = calculate_cost(estimated_tokens, token_price, team.markup_percentage)

     # ── All quota checks run inside the transaction; select_for_update serializes same-team requests ──
@@ -289,6 +349,20 @@ def video_generate_view(request):
     seen_urls = set()  # Dedup: reference each asset only once
     _asset_cache = {}  # group_id → [(asset_url, asset_type), ...] — avoid re-querying the same asset group
+    # Volcengine requires prompts to refer to assets as 「图片N/视频N/音频N」, never file names / asset ids.
+    # The loop keeps the per-type counters and the label→placeholder map in sync, then rewrites the
+    # prompt once after the loop.
+    # Invariant: at any moment image_n / video_n / audio_n equals the number of *_url content_items
+    # of that type already pushed.
+    label_to_placeholder: dict = {}
+    image_n = video_n = audio_n = 0
+
+    def _placeholder_for(asset_type):
+        """Read the placeholder for the current counter value. The counter must already be incremented."""
+        if asset_type == 'Video':
+            return f'视频{video_n}'
+        if asset_type == 'Audio':
+            return f'音频{audio_n}'
+        return f'图片{image_n}'
     from .models import Asset as AssetModel

     def _resolve_asset_group_all(gid, lbl):
@@ -369,11 +443,16 @@ def video_generate_view(request):
                     aid = 'asset-' + aid[6:]
                 resolved_asset_url = f'asset://{aid}'
                 if asset_obj.asset_type == 'Video':
+                    video_n += 1
                     content_items.append({'type': 'video_url', 'video_url': {'url': resolved_asset_url}, 'role': 'reference_video'})
                 elif asset_obj.asset_type == 'Audio':
+                    audio_n += 1
                     content_items.append({'type': 'audio_url', 'audio_url': {'url': resolved_asset_url}, 'role': 'reference_audio'})
                 else:
+                    image_n += 1
                     content_items.append({'type': 'image_url', 'image_url': {'url': resolved_asset_url}, 'role': 'reference_image'})
+                if label and label not in label_to_placeholder:
+                    label_to_placeholder[label] = _placeholder_for(asset_obj.asset_type)
             except AssetModel.DoesNotExist:
                 return Response({
                     'error': 'asset_not_found',
@@ -403,11 +482,17 @@ def video_generate_view(request):
                 }, status=status.HTTP_400_BAD_REQUEST)
             for asset_url, asset_type in asset_list:
                 if asset_type == 'Video':
+                    video_n += 1
                     content_items.append({'type': 'video_url', 'video_url': {'url': asset_url}, 'role': 'reference_video'})
                 elif asset_type == 'Audio':
+                    audio_n += 1
                     content_items.append({'type': 'audio_url', 'audio_url': {'url': asset_url}, 'role': 'reference_audio'})
                 else:
+                    image_n += 1
                     content_items.append({'type': 'image_url', 'image_url': {'url': asset_url}, 'role': 'reference_image'})
+            # Legacy compat path: one label expands to N images, so mapping it to a single 「图片N」
+            # would change semantics — do not register the label. The counters must still advance,
+            # otherwise numbering in the later local branch drifts.
+            logger.warning('legacy asset://group-%s used (label=%s), skip @-replacement (counter advanced by %d)', group_id, label, len(asset_list))
         except Exception as e:
             logger.warning('Failed to resolve asset group URL %s: %s', url, e)
             return Response({
@@ -417,6 +502,7 @@ def video_generate_view(request):
             continue  # The asset group has been expanded into multiple content_items; skip the single-item handling below
         if ref_type == 'image':
+            image_n += 1
             item = {'type': 'image_url', 'image_url': {'url': url}}
             # The API docs require role='reference_image' for every image in reference-image mode
             if mode == 'universal':
@@ -425,15 +511,25 @@ def video_generate_view(request):
                 item['role'] = role
             content_items.append(item)
         elif ref_type == 'video':
+            video_n += 1
             item = {'type': 'video_url', 'video_url': {'url': url}}
             if role:
                 item['role'] = role
             content_items.append(item)
         elif ref_type == 'audio':
+            audio_n += 1
             item = {'type': 'audio_url', 'audio_url': {'url': url}}
             if role:
                 item['role'] = role
             content_items.append(item)
+        else:
+            # Defensive: unknown ref_type (dirty data or future extension) → push no content_item, register nothing
+            logger.warning('unknown ref_type=%s url=%s label=%s, skipped', ref_type, url, label)
+            continue
+        if label and label not in label_to_placeholder:
+            _type_map = {'image': 'Image', 'video': 'Video', 'audio': 'Audio'}
+            label_to_placeholder[label] = _placeholder_for(_type_map[ref_type])

     logger.info('Video generate: %d content_items built (prompt=%s...)', len(content_items), prompt[:60])
@@ -447,7 +543,7 @@ def video_generate_view(request):
         duration=duration,
         seconds_consumed=duration,
         frozen_amount=estimated_cost,
-        resolution='720p',
+        resolution=resolution,
         tokens_consumed=0,
         cost_amount=0,
         base_cost_amount=0,
@@ -462,15 +558,21 @@ def video_generate_view(request):
     # ── Call the AirDrama API (outside the transaction, to avoid holding locks) ──
     from django.conf import settings as django_settings
     if django_settings.SEEDANCE_ENABLED and django_settings.ARK_API_KEY:
+        # Per the Volcengine spec, rewrite @label to 「图片N/视频N/音频N」; record.prompt in the DB keeps the original
+        sorted_pairs = sorted(label_to_placeholder.items(), key=lambda kv: -len(kv[0]))
+        api_prompt = _format_prompt_for_ark(prompt, sorted_pairs)
+        logger.info('[ark-prompt] original=%s | converted=%s | mapping=%s',
+                    prompt, api_prompt, label_to_placeholder)
         try:
             ark_response = create_task(
-                prompt=prompt,
+                prompt=api_prompt,
                 model=model,
                 content_items=content_items,
                 aspect_ratio=aspect_ratio,
                 duration=duration,
                 search_mode=search_mode,
                 seed=seed,
+                resolution=resolution,
             )
             ark_task_id = ark_response.get('id', '')
             record.ark_task_id = ark_task_id
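The length-descending sort feeding `_format_prompt_for_ark` is load-bearing. A standalone sketch (the two labels are invented for illustration) of what goes wrong without it:

```python
def format_prompt(prompt, label_placeholders):
    # Same replacement strategy as _format_prompt_for_ark: plain str.replace, no regex.
    result = prompt
    for label, placeholder in label_placeholders:
        if not label:
            continue
        result = result.replace(f'@{label}', placeholder)
    return result

mapping = {'碧': '图片1', '碧碧': '图片2'}

# Sorted by label length descending, as the caller is required to do:
pairs = sorted(mapping.items(), key=lambda kv: -len(kv[0]))
print(format_prompt('@碧碧 和 @碧 同框', pairs))                 # 图片2 和 图片1 同框

# Without the sort, '@碧' fires first and eats the front of '@碧碧':
print(format_prompt('@碧碧 和 @碧 同框', list(mapping.items())))  # 图片1碧 和 图片1 同框
```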
@@ -550,7 +652,9 @@ def _settle_payment(record, total_tokens):
         return
     config = QuotaConfig.objects.get_or_create(pk=1)[0]
     has_video_ref = _has_video_reference(record.reference_urls)
-    token_price = _get_token_price(config, record.model, has_video_ref)
+    # Settle at the task's actual resolution (a 1080P task settles at the 1080P price).
+    # record.resolution is never empty: model-level default='720p' + choices constraint + the data-migration backfill.
+    token_price = _get_token_price(config, record.model, has_video_ref, record.resolution)
     actual_cost = calculate_cost(total_tokens, token_price, team.markup_percentage)
     base_cost = calculate_base_cost(total_tokens, token_price)
     frozen = record.frozen_amount
@@ -634,6 +738,7 @@ def _serialize_task(record):
         'mode': record.mode,
         'model': record.model,
         'aspect_ratio': record.aspect_ratio,
+        'resolution': record.resolution,
         'duration': record.duration,
         'seconds_consumed': record.seconds_consumed,
         'tokens_consumed': record.tokens_consumed,
@@ -1705,6 +1810,7 @@ def admin_records_view(request):
             'mode': r.mode,
             'model': r.model,
             'aspect_ratio': r.aspect_ratio,
+            'resolution': r.resolution,
             'status': r.status,
             'error_message': r.error_message or '',
             'raw_error': r.raw_error or '',
@@ -1768,6 +1874,7 @@ def team_records_view(request):
             'mode': r.mode,
             'model': r.model,
             'aspect_ratio': r.aspect_ratio,
+            'resolution': r.resolution,
             'status': r.status,
             'error_message': r.error_message or '',
             'raw_error': r.raw_error or '',
@@ -1800,6 +1907,8 @@ def _settings_dict(config):
         'base_token_price_video': float(config.base_token_price_video),
         'base_token_price_fast': float(config.base_token_price_fast),
         'base_token_price_fast_video': float(config.base_token_price_fast_video),
+        'base_token_price_1080p': float(config.base_token_price_1080p),
+        'base_token_price_1080p_video': float(config.base_token_price_1080p_video),
         'announcement': config.announcement,
         'announcement_enabled': config.announcement_enabled,
         'max_desktop_sessions': config.max_desktop_sessions,
@@ -2656,6 +2765,7 @@ def profile_records_view(request):
             'mode': r.mode,
             'model': r.model,
             'aspect_ratio': r.aspect_ratio,
+            'resolution': r.resolution,
             'status': r.status,
             'error_message': r.error_message or '',
         })
@@ -2788,6 +2898,7 @@ def admin_assets_user_videos(request, user_id):
             'duration': r.duration,
             'seconds_consumed': r.seconds_consumed,
             'aspect_ratio': r.aspect_ratio,
+            'resolution': r.resolution,
             'reference_urls': r.reference_urls or [],
             'created_at': r.created_at.isoformat(),
         })
@@ -2869,6 +2980,7 @@ def team_assets_member_videos(request, member_id):
             'duration': r.duration,
             'seconds_consumed': r.seconds_consumed,
             'aspect_ratio': r.aspect_ratio,
+            'resolution': r.resolution,
             'reference_urls': r.reference_urls or [],
             'created_at': r.created_at.isoformat(),
         })
@@ -3156,15 +3268,20 @@ def asset_group_detail_view(request, group_id):
         return Response({'error': '素材组不存在'}, status=status.HTTP_404_NOT_FOUND)

     if request.method == 'DELETE':
-        # Delete all remote assets in this group
         from utils import assets_client
-        for asset in Asset.objects.filter(group=group):
-            if asset.remote_asset_id:
-                try:
-                    assets_client.delete_asset(asset.remote_asset_id)
-                except Exception as e:
-                    logger.warning('Failed to delete remote asset %s: %s', asset.remote_asset_id, e)
-        # Delete local records
+        from utils.assets_client import AssetsAPIError
+        if group.remote_group_id:
+            try:
+                assets_client.delete_asset_group(group.remote_group_id)
+            except AssetsAPIError as e:
+                # Already gone on the Volcengine side (e.g. deleted manually in their console) —
+                # keep cleaning up locally, so the delete stays idempotent
+                if e.code != 'NotFound.group_id':
+                    logger.warning('Failed to delete remote group %s: %s', group.remote_group_id, e)
+                    return Response(
+                        {'error': 'assets_api_error', 'message': e.user_message},
+                        status=status.HTTP_502_BAD_GATEWAY,
+                    )
+                logger.info('Remote group %s already gone, cleaning local only', group.remote_group_id)
         Asset.objects.filter(group=group).delete()
         group.delete()
         return Response({'message': '素材组已删除'})
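The treat-NotFound-as-success pattern above can be sketched independently of Django. `AssetsAPIError`'s shape here is assumed from the diff (a `.code` and a `.user_message`); the helper name is invented for illustration:

```python
class AssetsAPIError(Exception):
    # Assumed shape, matching the attributes the view above reads.
    def __init__(self, code, user_message):
        super().__init__(code)
        self.code = code
        self.user_message = user_message

def delete_group(remote_delete, group_id):
    """Return (ok, error_message). NotFound counts as success, so the delete is idempotent."""
    try:
        remote_delete(group_id)
    except AssetsAPIError as e:
        if e.code != 'NotFound.group_id':
            return False, e.user_message  # real failure: surface it, leave local rows alone
        # already gone remotely: fall through and clean up locally
    return True, ''

# A remote that always reports the group missing — a repeated delete still succeeds.
def gone(_):
    raise AssetsAPIError('NotFound.group_id', 'group not found')

print(delete_group(gone, 'g1'))  # (True, '')
```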
@@ -3446,7 +3563,8 @@ def asset_poll_status_view(request, asset_id):
     except Asset.DoesNotExist:
         return Response({'error': '素材不存在'}, status=status.HTTP_404_NOT_FOUND)
-    if asset.remote_asset_id:
+    # Assets that are already active and have a URL skip the remote query (avoids cross-project assets being deleted by mistake)
+    if asset.remote_asset_id and asset.status != 'active':
         from utils import assets_client
         from utils.assets_client import AssetsAPIError
         try:

View File

@@ -0,0 +1,164 @@
"""
1080P API integration tests — verify the entry validation in video_generate_view.
"""
import os
import sys

import django

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
django.setup()

import unittest

from django.test import TestCase
from django.contrib.auth import get_user_model
from rest_framework.test import APIClient

from apps.accounts.models import Team
from apps.generation.models import QuotaConfig

User = get_user_model()


class TestVideoGenerateResolution(TestCase):
    """Resolution + model combination validation in video_generate_view."""

    def setUp(self):
        # Initialize QuotaConfig
        QuotaConfig.objects.get_or_create(pk=1)
        # Create a test team + user
        self.team = Team.objects.create(
            name='test-1080p',
            is_active=True,
            monthly_spending_limit=1000,
            markup_percentage=0,
            balance=1000,
            frozen_amount=0,
        )
        self.user = User.objects.create_user(
            username='test_1080p_user',
            email='test1080p@example.com',
            password='testpass123',
            team=self.team,
            spending_limit=-1,
            daily_generation_limit=-1,
            monthly_generation_limit=-1,
        )
        self.client = APIClient()
        self.client.force_authenticate(user=self.user)

    def test_reject_fast_plus_1080p(self):
        """Principle 1: the Fast + 1080P combination must be rejected with 400, never silently downgraded."""
        resp = self.client.post('/api/v1/video/generate', {
            'prompt': '测试',
            'mode': 'universal',
            'model': 'seedance_2.0_fast',
            'aspect_ratio': '16:9',
            'duration': 5,
            'resolution': '1080p',
            'references': [],
        }, format='json')
        self.assertEqual(resp.status_code, 400)
        body = resp.json()
        self.assertEqual(body.get('error'), 'invalid_resolution')
        # The message must tell the user exactly why
        self.assertIn('1080P', body.get('message', ''))
        self.assertIn('Fast', body.get('message', ''))

    def test_reject_adaptive_ratio(self):
        """Principle 1: 'adaptive' is not in the six-value whitelist — reject."""
        resp = self.client.post('/api/v1/video/generate', {
            'prompt': '测试',
            'mode': 'universal',
            'model': 'seedance_2.0',
            'aspect_ratio': 'adaptive',
            'duration': 5,
            'resolution': '720p',
            'references': [],
        }, format='json')
        self.assertEqual(resp.status_code, 400)
        # serializer error: aspect_ratio is not among the choices
        self.assertIn('aspect_ratio', str(resp.content))

    def test_reject_invalid_resolution(self):
        """A resolution outside the 480p/720p/1080p whitelist is rejected."""
        resp = self.client.post('/api/v1/video/generate', {
            'prompt': '测试',
            'mode': 'universal',
            'model': 'seedance_2.0',
            'aspect_ratio': '16:9',
            'duration': 5,
            'resolution': '4K',
            'references': [],
        }, format='json')
        self.assertEqual(resp.status_code, 400)

    def test_resolution_default_720p_when_missing(self):
        """When an old client omits the resolution field, the serializer default='720p' applies."""
        # resolution deliberately omitted (old-client compatibility)
        resp = self.client.post('/api/v1/video/generate', {
            'prompt': '测试',
            'mode': 'universal',
            'model': 'seedance_2.0',
            'aspect_ratio': '16:9',
            'duration': 5,
            'references': [],
        }, format='json')
        # The serializer should accept it (default='720p'). The request may still fail for other
        # reasons (e.g. the Volcengine API not being enabled), but never with a resolution-related 400.
        if resp.status_code == 400:
            body = resp.json()
            self.assertNotEqual(body.get('error'), 'invalid_resolution')

    def test_accept_valid_1080p_airdrama(self):
        """Principle: AirDrama + 1080P is a legal combination and must not be rejected with 400."""
        resp = self.client.post('/api/v1/video/generate', {
            'prompt': '测试',
            'mode': 'universal',
            'model': 'seedance_2.0',
            'aspect_ratio': '16:9',
            'duration': 5,
            'resolution': '1080p',
            'references': [],
        }, format='json')
        # Must not be a 400 caused by the resolution (it may still fail on balance / API availability etc.)
        if resp.status_code == 400:
            body = resp.json()
            self.assertNotEqual(body.get('error'), 'invalid_resolution')


class TestAdminSettingsResponse(TestCase):
    """GET /api/v1/admin/settings must return every token_price field, guarding against the
    v0.19.0-style regression where a field was added to the serializer but missed in _settings_dict."""

    def setUp(self):
        QuotaConfig.objects.get_or_create(pk=1)
        self.admin = User.objects.create_user(
            username='test_admin_settings',
            email='test_admin_settings@example.com',
            password='testpass123',
            is_staff=True,
            is_superuser=True,
        )
        self.client = APIClient()
        self.client.force_authenticate(user=self.admin)

    def test_get_returns_all_token_price_fields(self):
        """GET returns all six unit prices (every resolution tier × with/without video) — a missing
        field leaves the corresponding admin input box blank."""
        resp = self.client.get('/api/v1/admin/settings')
        self.assertEqual(resp.status_code, 200)
        body = resp.json()
        for field in (
            'base_token_price',
            'base_token_price_video',
            'base_token_price_fast',
            'base_token_price_fast_video',
            'base_token_price_1080p',
            'base_token_price_1080p_video',
        ):
            self.assertIn(field, body, f'GET /admin/settings response missing {field!r} — its admin input box would render empty')
            self.assertIsInstance(body[field], (int, float), f'{field} should be numeric')


if __name__ == '__main__':
    unittest.main(verbosity=2)

View File

@ -0,0 +1,208 @@
"""
1080P 分辨率支持的计费逻辑测试 严格对齐用户三原则
1. 不兜底/静默降级
2. 钱的计算绝对准确纯官方公式
3. 不隐藏 bug非法组合 fail loud
运行方式
cd backend && source venv/Scripts/activate && python -m pytest tests/test_1080p_billing.py -v
Django test runner
python manage.py test tests.test_1080p_billing
"""
import os
import sys
import django
# Django setup
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
django.setup()
import unittest
from utils.billing import (
RESOLUTION_MAP,
get_resolution,
estimate_tokens,
calculate_cost,
calculate_base_cost,
)
class TestResolutionMap(unittest.TestCase):
"""验证 RESOLUTION_MAP 的 18 个组合像素值与官方文档一致。"""
def test_1080p_pixels(self):
# 来自 docs/API文档/创建视频生成任务API.md Seedance 2.0 & 2.0 fast 列
self.assertEqual(get_resolution('16:9', '1080p'), (1920, 1080))
self.assertEqual(get_resolution('9:16', '1080p'), (1080, 1920))
self.assertEqual(get_resolution('4:3', '1080p'), (1664, 1248))
self.assertEqual(get_resolution('1:1', '1080p'), (1440, 1440))
self.assertEqual(get_resolution('3:4', '1080p'), (1248, 1664))
# 21:9 特别注意:是 2206×946不是 seedance 1.0 的 2176×928
self.assertEqual(get_resolution('21:9', '1080p'), (2206, 946))
def test_720p_pixels(self):
self.assertEqual(get_resolution('16:9', '720p'), (1280, 720))
self.assertEqual(get_resolution('21:9', '720p'), (1470, 630))
def test_480p_pixels(self):
self.assertEqual(get_resolution('16:9', '480p'), (864, 496))
self.assertEqual(get_resolution('21:9', '480p'), (992, 432))
def test_invalid_combo_raises(self):
"""原则 1非法组合必须 fail loud不静默降级。"""
with self.assertRaises(KeyError):
get_resolution('adaptive', '720p') # adaptive 不在 map
with self.assertRaises(KeyError):
get_resolution('16:9', '4K') # 不存在的 tier
with self.assertRaises(KeyError):
get_resolution('unknown', 'unknown')
def test_tier_required(self):
"""tier 参数必填,不允许默认 720p 静默降级。"""
with self.assertRaises(TypeError):
get_resolution('16:9') # type: ignore - 故意漏参数
class TestEstimateTokens(unittest.TestCase):
"""
严格对齐官方公式`(输入视频时长+输出时长) × × × 帧率 / 1024`
预估端不做最低 token 修正那是火山计费侧逻辑
"""
def test_formula_no_input_video(self):
# 720P 16:9 (1280×720), 5s 输出, 24fps
# 1280 × 720 × 24 × 5 / 1024 = 108000
self.assertEqual(estimate_tokens(1280, 720, 5), 108000)
def test_formula_with_input_video(self):
# 720P 16:9, 5s 输出 + 5s 输入 = 10s 总时长
# 1280 × 720 × 24 × 10 / 1024 = 216000
self.assertEqual(estimate_tokens(1280, 720, 5, input_video_duration=5), 216000)
def test_1080p_formula(self):
# 1080P 16:9 (1920×1080), 5s 输出, 无输入视频
# 1920 × 1080 × 24 × 5 / 1024 = 243000
self.assertEqual(estimate_tokens(1920, 1080, 5), 243000)
def test_1080p_with_input_video(self):
# 1080P 16:9, 5s 输出 + 2s 输入 = 7s
# 1920 × 1080 × 24 × 7 / 1024 = 340200
self.assertEqual(estimate_tokens(1920, 1080, 5, input_video_duration=2), 340200)
def test_no_silent_min_token_adjustment(self):
"""原则 2预估端严格按公式不做最低 token 修正。
火山文档说 1080p 5s 输入含视频最低 437400 tokens但那是火山计费侧的事
我们预估就老老实实按公式算 (5s+2s)×1920×1080×24/1024 = 340200不擅自拉高
"""
# 1080p 5s 输出 + 2s 输入 = 7s 总时长
# 公式值 340200官方最低 437400
# 我们应该返回公式值,不主动调到最低值
result = estimate_tokens(1920, 1080, 5, input_video_duration=2)
self.assertEqual(result, 340200, "预估端不应主动修正到火山最低 token")
def test_float_input_duration(self):
"""输入视频时长可能是浮点数(前端 getMediaInfo 读取),要正确累加。"""
# 720P 16:9, 5s 输出 + 3.5s 输入 = 8.5s
# 1280 × 720 × 24 × 8.5 / 1024 = 183600
self.assertEqual(estimate_tokens(1280, 720, 5, input_video_duration=3.5), 183600)
class TestGetTokenPrice(unittest.TestCase):
"""验证单价选择逻辑 — 4 种模型×视频组合 + 1080p 独立单价 + Fast+1080P fail loud。"""
def setUp(self):
# Mock QuotaConfig — 用官方文档默认值
from types import SimpleNamespace
from decimal import Decimal
self.config = SimpleNamespace(
base_token_price=Decimal('46'),
base_token_price_video=Decimal('28'),
base_token_price_fast=Decimal('37'),
base_token_price_fast_video=Decimal('22'),
base_token_price_1080p=Decimal('51'),
base_token_price_1080p_video=Decimal('31'),
)
from apps.generation.views import _get_token_price
self._get_token_price = _get_token_price
def test_seedance_2_0_720p_no_video(self):
"""AirDrama 720P 不含视频 = 46 元/百万 tokens."""
price = self._get_token_price(self.config, 'seedance_2.0', False, '720p')
self.assertEqual(price, 46)
def test_seedance_2_0_720p_with_video(self):
"""AirDrama 720P 含视频 = 28."""
price = self._get_token_price(self.config, 'seedance_2.0', True, '720p')
self.assertEqual(price, 28)
def test_seedance_2_0_480p_same_as_720p(self):
"""480p 和 720p 共享同一单价(官方价格一致)。"""
price_480 = self._get_token_price(self.config, 'seedance_2.0', False, '480p')
price_720 = self._get_token_price(self.config, 'seedance_2.0', False, '720p')
self.assertEqual(price_480, price_720)
def test_seedance_2_0_1080p_no_video(self):
"""AirDrama 1080P 不含视频 = 51独立单价不是 720p 的 46."""
price = self._get_token_price(self.config, 'seedance_2.0', False, '1080p')
self.assertEqual(price, 51)
def test_seedance_2_0_1080p_with_video(self):
"""AirDrama 1080P 含视频 = 31独立单价不是 720p 的 28."""
price = self._get_token_price(self.config, 'seedance_2.0', True, '1080p')
self.assertEqual(price, 31)
def test_fast_720p_no_video(self):
"""Fast 720P 不含视频 = 37."""
price = self._get_token_price(self.config, 'seedance_2.0_fast', False, '720p')
self.assertEqual(price, 37)
def test_fast_720p_with_video(self):
"""Fast 720P 含视频 = 22."""
price = self._get_token_price(self.config, 'seedance_2.0_fast', True, '720p')
self.assertEqual(price, 22)
def test_fast_480p_uses_fast_price(self):
"""Fast 不分 480p/720p都用 fast 单价。"""
price_480 = self._get_token_price(self.config, 'seedance_2.0_fast', False, '480p')
price_720 = self._get_token_price(self.config, 'seedance_2.0_fast', False, '720p')
self.assertEqual(price_480, price_720)
self.assertEqual(price_480, 37)
def test_fast_1080p_raises_value_error(self):
"""Principles 1 + 3: Fast + 1080P must fail loud, never silently charge the 720p price (which would mislead users)."""
with self.assertRaises(ValueError) as ctx:
self._get_token_price(self.config, 'seedance_2.0_fast', False, '1080p')
self.assertIn('1080p', str(ctx.exception).lower())
with self.assertRaises(ValueError):
self._get_token_price(self.config, 'seedance_2.0_fast', True, '1080p')
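The selection logic these tests pin down can be sketched as follows. This is a reconstruction from the tests alone, not the real helper (`_get_token_price` lives in `apps.generation.views`); the field names follow the mocked QuotaConfig above.

```python
from decimal import Decimal
from types import SimpleNamespace

def get_token_price(config, model, has_video, resolution):
    """Sketch of the unit-price selection described by the tests above."""
    is_fast = model.endswith('_fast')
    if is_fast and resolution == '1080p':
        # Fail loud: fast has no 1080p price; never silently bill at 720p.
        raise ValueError('1080p is not supported for the fast model')
    if resolution == '1080p':
        return config.base_token_price_1080p_video if has_video else config.base_token_price_1080p
    # 480p and 720p share one price within each model family
    if is_fast:
        return config.base_token_price_fast_video if has_video else config.base_token_price_fast
    return config.base_token_price_video if has_video else config.base_token_price

config = SimpleNamespace(
    base_token_price=Decimal('46'), base_token_price_video=Decimal('28'),
    base_token_price_fast=Decimal('37'), base_token_price_fast_video=Decimal('22'),
    base_token_price_1080p=Decimal('51'), base_token_price_1080p_video=Decimal('31'),
)
print(get_token_price(config, 'seedance_2.0', False, '1080p'))  # 51
```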
class TestCalculateCost(unittest.TestCase):
"""Verify the charge amount = tokens x unit price x (1 + markup%), accurate to the cent."""
def test_720p_cost_matches_official_example(self):
"""Official example: 720P 5 s 16:9 = 4.97 CNY (no markup)."""
# 720p 5 s formula value: 108000 tokens
tokens = estimate_tokens(1280, 720, 5)
# 46 CNY per million x 108000 / 1000000 = 4.968 ≈ 4.97
cost = calculate_cost(tokens, 46, 0)
self.assertEqual(str(cost), '4.97')
def test_1080p_no_video_cost(self):
"""1080P 5 s 16:9 without video = 1920x1080x24x5/1024 x 51 / 1000000 = 12.393 ≈ 12.39 CNY."""
tokens = estimate_tokens(1920, 1080, 5)
cost = calculate_cost(tokens, 51, 0)
self.assertEqual(str(cost), '12.39')
def test_markup_applied(self):
"""Team markup of 20%."""
tokens = estimate_tokens(1280, 720, 5) # 108000
cost = calculate_cost(tokens, 46, 20)
# 4.968 × 1.2 = 5.9616 → 5.96
self.assertEqual(str(cost), '5.96')
if __name__ == '__main__':
unittest.main(verbosity=2)
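For reference, the estimates and charges asserted above can be reproduced with a minimal sketch of the two helpers (assuming half-up rounding to the cent; the real implementations live in the app, and the `input_video_duration` parameter mirrors the official formula):

```python
from decimal import Decimal, ROUND_HALF_UP

def estimate_tokens(width, height, duration, fps=24, input_video_duration=0):
    # Official formula: (input + output duration) x width x height x fps / 1024
    return round(width * height * fps * (duration + input_video_duration) / 1024)

def calculate_cost(tokens, base_price, markup_percentage):
    # tokens x (CNY per million tokens) x (1 + markup%), rounded to the cent
    raw = Decimal(tokens) * Decimal(base_price) / Decimal(1000000)
    raw *= (Decimal(100) + Decimal(markup_percentage)) / Decimal(100)
    return raw.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)

print(calculate_cost(estimate_tokens(1280, 720, 5), 46, 0))    # 4.97
print(calculate_cost(estimate_tokens(1920, 1080, 5), 51, 0))   # 12.39
print(calculate_cost(estimate_tokens(1280, 720, 5), 46, 20))   # 5.96
```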

View File

@ -0,0 +1,239 @@
"""
Tests for converting prompts into Volcano's 图片N/视频N/音频N format (v0.19.1+).
Volcano models cannot understand file names / asset ids; assets must be referred to by type + ordinal (official docs, FAQ Q3).
This test file covers:
unit tests: the pure function _format_prompt_for_ark
integration tests: video_generate_view end to end, counter alignment (key regression)
"""
import os
import sys
import django
from unittest import mock
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
django.setup()
import unittest
from django.test import TestCase, override_settings
from django.contrib.auth import get_user_model
from rest_framework.test import APIClient
from apps.accounts.models import Team
from apps.generation.models import QuotaConfig, AssetGroup, Asset, GenerationRecord
from apps.generation.views import _format_prompt_for_ark
User = get_user_model()
# ────────────────────────────────────────────────
# Unit tests: pure function _format_prompt_for_ark
# ────────────────────────────────────────────────
class TestFormatPromptForArk(unittest.TestCase):
"""Cover the label-replacement scenarios (string level)."""
def test_basic_replacement(self):
"""@label is replaced with its placeholder."""
out = _format_prompt_for_ark('@碧碧.jpg 是碧儿', [('碧碧.jpg', '图片1')])
self.assertEqual(out, '图片1 是碧儿')
def test_multi_type_independent_counters(self):
"""图片/视频/音频各自独立编号。"""
pairs = [
('img1.jpg', '图片1'),
('video1.mp4', '视频1'),
('audio1.mp3', '音频1'),
]
out = _format_prompt_for_ark('用 @img1.jpg @video1.mp4 和 @audio1.mp3', pairs)
self.assertEqual(out, '用 图片1 视频1 和 音频1')
def test_same_label_multiple_at_signs(self):
"""The same label @-mentioned several times in the prompt is replaced everywhere with the same placeholder (str.replace is global)."""
out = _format_prompt_for_ark('@foo 然后 @foo 再 @foo', [('foo', '图片1')])
self.assertEqual(out, '图片1 然后 图片1 再 图片1')
def test_substring_conflict_long_first(self):
"""When labels are substrings of each other ('碧' is a substring of '碧碧'), the longer label must be replaced first."""
# simulate the caller already passing pairs sorted by length, descending
pairs = [('碧碧', '图片2'), ('', '图片1')]
out = _format_prompt_for_ark('@碧碧 和 @碧 是姐妹', pairs)
self.assertEqual(out, '图片2 和 图片1 是姐妹')
def test_label_with_regex_metachars(self):
"""Labels containing regex metacharacters ([ ] + . * ? etc.); str.replace must not mis-handle them as regex."""
pairs = [
('[test].png', '图片1'),
('a+b.png', '图片2'),
('a.b*.png', '图片3'),
]
prompt = '@[test].png 和 @a+b.png 还有 @a.b*.png'
out = _format_prompt_for_ark(prompt, pairs)
self.assertEqual(out, '图片1 和 图片2 还有 图片3')
def test_empty_mapping(self):
"""A prompt with no @ assets is returned unchanged."""
out = _format_prompt_for_ark('今天天气真好', [])
self.assertEqual(out, '今天天气真好')
def test_label_in_mapping_not_in_prompt(self):
"""Label present in the mapping but never @-mentioned in the prompt: left untouched."""
out = _format_prompt_for_ark('一段普通文字', [('foo.jpg', '图片1')])
self.assertEqual(out, '一段普通文字')
def test_chinese_punctuation_around_label(self):
"""Chinese punctuation around a label does not interfere with replacement."""
out = _format_prompt_for_ark('@碧碧.jpg"你好。"', [('碧碧.jpg', '图片1')])
self.assertEqual(out, '图片1"你好。"')
def test_empty_label_skipped(self):
"""An empty-string label is skipped without crashing."""
out = _format_prompt_for_ark('@real.jpg 内容', [('', '图片0'), ('real.jpg', '图片1')])
self.assertEqual(out, '图片1 内容')
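Taken together, the unit tests above pin down the whole contract. A minimal sketch consistent with them (the real function lives in `apps.generation.views`; this reconstruction is for illustration only) is:

```python
def format_prompt_for_ark(prompt, label_pairs):
    """Replace each '@label' with its placeholder (图片N/视频N/音频N).

    label_pairs must arrive sorted by label length, longest first, so
    '@碧碧' is consumed before its substring '@碧'.
    """
    for label, placeholder in label_pairs:
        if not label:  # empty labels are skipped, never crash
            continue
        # plain str.replace: global and regex-free, so metacharacters
        # in file names ([, +, *, ?) need no escaping
        prompt = prompt.replace('@' + label, placeholder)
    return prompt

print(format_prompt_for_ark('@碧碧 和 @碧 是姐妹', [('碧碧', '图片2'), ('碧', '图片1')]))
# 图片2 和 图片1 是姐妹
```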
# ────────────────────────────────────────────────
# Integration tests: video_generate_view
# ────────────────────────────────────────────────
@override_settings(SEEDANCE_ENABLED=True, ARK_API_KEY='fake-test-key')
class TestVideoGenerateArkPrompt(TestCase):
"""Via POST /api/v1/video/generate, verify prompt conversion + original text kept in the DB + counter alignment."""
def setUp(self):
QuotaConfig.objects.get_or_create(pk=1)
self.team = Team.objects.create(
name='test-ark-prompt',
is_active=True,
monthly_spending_limit=10000,
markup_percentage=0,
balance=10000,
frozen_amount=0,
)
self.user = User.objects.create_user(
username='ark_prompt_user',
email='arkprompt@example.com',
password='testpass123',
team=self.team,
spending_limit=-1,
daily_generation_limit=-1,
monthly_generation_limit=-1,
)
self.client = APIClient()
self.client.force_authenticate(user=self.user)
# create two local assets reused across scenarios
self.group_a = AssetGroup.objects.create(
team=self.team, remote_group_id='group-fake-a', name='角色A',
)
self.asset_bibi = Asset.objects.create(
group=self.group_a, remote_asset_id='asset-fake-bibi', name='碧碧.jpg',
url='https://fake/bibi.jpg', asset_type='Image', status='active',
)
self.asset_bubu = Asset.objects.create(
group=self.group_a, remote_asset_id='asset-fake-bubu', name='布布.jpg',
url='https://fake/bubu.jpg', asset_type='Image', status='active',
)
def _post_generate(self, prompt, references):
return self.client.post('/api/v1/video/generate', {
'prompt': prompt,
'mode': 'universal',
'model': 'seedance_2.0',
'aspect_ratio': '9:16',
'duration': 5,
'resolution': '720p',
'references': references,
}, format='json')
@mock.patch('apps.generation.tasks.poll_video_task')
@mock.patch('apps.generation.views.create_task')
def test_view_converts_prompt_for_local_assets(self, mock_create_task, mock_poll):
"""Two @local assets in the prompt: the prompt sent to Volcano becomes 图片1/图片2."""
mock_create_task.return_value = {'id': 'ark-mock-1'}
prompt = '@碧碧.jpg 是碧儿,@布布.jpg 是步若'
resp = self._post_generate(prompt, [
{'url': f'asset://local-{self.asset_bibi.id}', 'type': 'image', 'label': '碧碧.jpg'},
{'url': f'asset://local-{self.asset_bubu.id}', 'type': 'image', 'label': '布布.jpg'},
])
self.assertEqual(resp.status_code, 202, resp.content)
self.assertTrue(mock_create_task.called, 'create_task must be called')
sent_prompt = mock_create_task.call_args.kwargs['prompt']
self.assertEqual(sent_prompt, '图片1 是碧儿,图片2 是步若')
@mock.patch('apps.generation.tasks.poll_video_task')
@mock.patch('apps.generation.views.create_task')
def test_view_db_prompt_unchanged_for_reedit(self, mock_create_task, mock_poll):
"""DB.prompt must keep the user's original text (including @xxx.jpg) so reEdit can rebuild the thumbnail tags."""
mock_create_task.return_value = {'id': 'ark-mock-2'}
prompt = '@碧碧.jpg 走过来'
resp = self._post_generate(prompt, [
{'url': f'asset://local-{self.asset_bibi.id}', 'type': 'image', 'label': '碧碧.jpg'},
])
self.assertEqual(resp.status_code, 202, resp.content)
rec = GenerationRecord.objects.filter(user=self.user).order_by('-id').first()
self.assertIsNotNone(rec)
self.assertEqual(rec.prompt, prompt)  # original text, no '图片1'
self.assertIn('@碧碧.jpg', rec.prompt)
self.assertNotIn('图片1', rec.prompt)
@mock.patch('apps.generation.tasks.poll_video_task')
@mock.patch('apps.generation.views.create_task')
def test_legacy_group_url_skips_replacement(self, mock_create_task, mock_poll):
"""Legacy asset://group-{id} path: the counter advances but no label is registered; a WARNING is logged and @组名 stays in the prompt as-is."""
mock_create_task.return_value = {'id': 'ark-mock-3'}
prompt = '@角色A 做动作'
with self.assertLogs('apps.generation.views', level='WARNING') as cm:
resp = self._post_generate(prompt, [
{'url': f'asset://group-{self.group_a.id}', 'type': 'image', 'label': '角色A'},
])
self.assertEqual(resp.status_code, 202, resp.content)
sent_prompt = mock_create_task.call_args.kwargs['prompt']
self.assertEqual(sent_prompt, prompt)  # not replaced
# verify the WARNING log
self.assertTrue(any('legacy asset://group-' in line for line in cm.output),
f'expected legacy warning, got: {cm.output}')
@mock.patch('apps.generation.tasks.poll_video_task')
@mock.patch('apps.generation.views.create_task')
def test_counter_alignment_with_mixed_local_and_group(self, mock_create_task, mock_poll):
"""Key regression: after the group expands to 2 images, the following local asset's label must map to 图片3, not 图片1."""
mock_create_task.return_value = {'id': 'ark-mock-4'}
prompt = '@foo 是主角'
resp = self._post_generate(prompt, [
{'url': f'asset://group-{self.group_a.id}', 'type': 'image', 'label': '角色A'},  # expands to 2 images
{'url': f'asset://local-{self.asset_bibi.id}', 'type': 'image', 'label': 'foo'},
])
self.assertEqual(resp.status_code, 202, resp.content)
sent_prompt = mock_create_task.call_args.kwargs['prompt']
# foo is the 3rd image (the group's two come first)
self.assertEqual(sent_prompt, '图片3 是主角')
# verify content_items length
sent_content_items = mock_create_task.call_args.kwargs['content_items']
image_items = [it for it in sent_content_items if it['type'] == 'image_url']
self.assertEqual(len(image_items), 3)
@mock.patch('apps.generation.tasks.poll_video_task')
@mock.patch('apps.generation.views.create_task')
def test_counter_alignment_mixed_types(self, mock_create_task, mock_poll):
"""Image/audio counters are independent: image ordinals do not jump because an audio sits in between."""
mock_create_task.return_value = {'id': 'ark-mock-5'}
# create an audio asset
asset_audio = Asset.objects.create(
group=self.group_a, remote_asset_id='asset-fake-audio', name='speech.mp3',
url='https://fake/speech.mp3', asset_type='Audio', status='active',
)
prompt = '@碧碧.jpg 说 @speech.mp3 的话,@布布.jpg 听'
resp = self._post_generate(prompt, [
{'url': f'asset://local-{self.asset_bibi.id}', 'type': 'image', 'label': '碧碧.jpg'},
{'url': f'asset://local-{asset_audio.id}', 'type': 'audio', 'label': 'speech.mp3'},
{'url': f'asset://local-{self.asset_bubu.id}', 'type': 'image', 'label': '布布.jpg'},
])
self.assertEqual(resp.status_code, 202, resp.content)
sent_prompt = mock_create_task.call_args.kwargs['prompt']
# 碧碧=图片1, speech=音频1, 布布=图片2 (independent image/audio counters)
self.assertEqual(sent_prompt, '图片1 说 音频1 的话,图片2 听')
if __name__ == '__main__':
unittest.main(verbosity=2)

View File

@ -15,6 +15,7 @@ ERROR_MESSAGES = {
'InputAudioSensitiveContentDetected': '参考音频包含敏感内容,请更换音频后重试',
# Output content moderation
'OutputVideoSensitiveContentDetected': '生成的视频包含敏感内容,已被系统拦截,请修改提示词后重试',
'OutputVideoSensitiveContentDetected.PolicyViolation': '生成的视频涉及版权限制内容(如知名IP、名人肖像等),已被系统拦截,请修改提示词后重试',
'OutputImageSensitiveContentDetected': '生成的图片包含敏感内容,已被系统拦截',
# Parameter errors
'InvalidParameter': '请求参数无效,请检查输入内容',
@ -91,7 +92,7 @@ def _headers():
}
def create_task(prompt, model, content_items, aspect_ratio, duration, resolution,
generate_audio=True, search_mode='off', seed=-1):
"""Create a video generation task.
@ -101,6 +102,9 @@ def create_task(prompt, model, content_items, aspect_ratio, duration,
content_items: List of media content dicts (image_url, video_url, audio_url).
aspect_ratio: Video aspect ratio ('16:9', '9:16', etc.).
duration: Video duration in seconds.
resolution: Output video resolution ('480p'|'720p'|'1080p'). Required with no default, so a forgotten argument cannot silently downgrade: a 1080p task accidentally billed as 720p would skew charges and violate the accuracy principle.
Note: 1080p is supported by Seedance 2.0 only.
generate_audio: Whether to generate audio with the video.
search_mode: 'smart' to enable internet search, 'off' to disable.
@ -119,6 +123,7 @@ def create_task(prompt, model, content_items, aspect_ratio, duration,
'content': content,
'generate_audio': generate_audio,
'ratio': aspect_ratio,
'resolution': resolution,
'duration': duration,
'watermark': False,
'seed': seed,

View File

@ -83,6 +83,7 @@ def _get_service():
'UpdateAssetGroup': ApiInfo('POST', '/', {'Action': 'UpdateAssetGroup', 'Version': API_VERSION}, {}, {}),
'UpdateAsset': ApiInfo('POST', '/', {'Action': 'UpdateAsset', 'Version': API_VERSION}, {}, {}),
'DeleteAsset': ApiInfo('POST', '/', {'Action': 'DeleteAsset', 'Version': API_VERSION}, {}, {}),
'DeleteAssetGroup': ApiInfo('POST', '/', {'Action': 'DeleteAssetGroup', 'Version': API_VERSION}, {}, {}),
}
return Service(service_info, api_info)
@ -225,3 +226,9 @@ def delete_asset(asset_id: str):
"""Delete a single asset from the remote API."""
body = {'Id': asset_id, 'ProjectName': PROJECT_NAME}
_do_request('DeleteAsset', body)
def delete_asset_group(group_id: str):
"""Delete an asset group and cascade-delete all its assets on the remote API."""
body = {'Id': group_id, 'ProjectName': PROJECT_NAME}
_do_request('DeleteAssetGroup', body)

View File

@ -22,20 +22,62 @@ RESOLUTION_MAP = {
('480p', '1:1'): (640, 640),
('480p', '3:4'): (560, 752),
('480p', '21:9'): (992, 432),
# 1080p (from the Volcano API docs, Seedance 2.0 & 2.0 fast columns)
('1080p', '16:9'): (1920, 1080),
('1080p', '9:16'): (1080, 1920),
('1080p', '4:3'): (1664, 1248),
('1080p', '1:1'): (1440, 1440),
('1080p', '3:4'): (1248, 1664),
('1080p', '21:9'): (2206, 946),
}
# default frame rate
DEFAULT_FPS = 24
def get_resolution(aspect_ratio: str, tier: str) -> tuple:
"""Return (width, height) in pixels for the given aspect ratio and resolution tier.
tier is required and has no default, so a forgotten argument cannot silently downgrade to 720p (billing-accuracy principle).
If (tier, aspect_ratio) is not in RESOLUTION_MAP (e.g. adaptive), raise KeyError so the caller notices and fails loud; upstream (serializer/frontend) is responsible for passing valid combinations.
"""
key = (tier, aspect_ratio)
if key not in RESOLUTION_MAP:
raise KeyError(
f'不支持的分辨率组合: tier={tier!r}, aspect_ratio={aspect_ratio!r}. '
f'仅支持 480p/720p/1080p × 16:9/9:16/4:3/1:1/3:4/21:9'
)
return RESOLUTION_MAP[key]
def estimate_tokens(
width: int,
height: int,
duration: int,
fps: int = DEFAULT_FPS,
input_video_duration: float = 0,
) -> int:
"""Estimate the tokens a video generation will consume.
Official Volcano formula: (input video duration + output video duration) x width x height x fps / 1024.
This is an estimate used only for frontend display and quota freezing; the real charge follows the usage.total_tokens returned by the Volcano API and is settled against the actual value in _settle_payment.
The minimum-token floor is Volcano-side billing logic; we do not mirror that table in the estimator, to avoid drifting out of sync with the official source.
Args:
width: output video width in pixels
height: output video height in pixels
duration: output video duration in seconds
fps: frame rate, default 24
input_video_duration: total duration of input reference videos in seconds, default 0
Returns:
estimated token count (int)
"""
total_duration = duration + (input_video_duration or 0)
return round(width * height * fps * total_duration / 1024)
def calculate_cost(tokens: int, base_price, markup_percentage) -> Decimal:

View File

@ -0,0 +1,85 @@
# [Customer name to be filled in] Seedance 2.0 1080P
> This document is for Ark guaranteed-quota customers only. Do not share it with customers who have not signed a guarantee agreement.
### Feature description
Currently, 1080p output from seedance2.0 does not yet support the trusted-output feature; that is, 1080p videos produced by seedance2.0 that contain human faces will undergo a safety review. If you need to reference a 1080p video containing faces, upload that video to the virtual asset library.
#### **Feature 1: output video resolution supports 1080P**
* **Launch time**: rollout (domestic and overseas) expected to complete on April 16 at 22:00
* **User scope**:
* **"Early-access program": within 72 hours of launch, selected users get early access**; the official site docs stay unchanged for now
* After 72 hours, the feature opens to all users and the official docs are published
* **Supported models**: Seedance 2.0 only (Seedance 2.0 fast is not supported)
* **Usage**: pass `1080p` in the `resolution` request parameter
```shell
curl https://ark.cn-beijing.volces.com/api/v3/contents/generations/tasks \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $ARK_API_KEY" \
-d '{
"model": "doubao-seedance-2-0-260128",
"content": [
{
"type": "text",
"text": "全程使用视频1的第一视角构图全程使用音频1作为背景音乐。第一人称视角果茶宣传广告seedance牌「苹苹安安」苹果果茶限定款首帧为图片1你的手摘下一颗带晨露的阿克苏红苹果轻脆的苹果碰撞声2-4 秒快速切镜你的手将苹果块投入雪克杯加入冰块与茶底用力摇晃冰块碰撞声与摇晃声卡点轻快鼓点背景音「鲜切现摇」4-6 秒第一人称成品特写分层果茶倒入透明杯你的手轻挤奶盖在顶部铺展在杯身贴上粉红包标镜头拉近看奶盖与果茶的分层纹理6-8 秒第一人称手持举杯你将图片2中的果茶举到镜头前模拟递到观众面前的视角杯身标签清晰可见背景音「来一口鲜爽」尾帧定格为图片2。背景声音统一为女生音色。"
},
{
"type": "image_url",
"image_url": {
"url": "https://ark-project.tos-cn-beijing.volces.com/doc_image/r2v_tea_pic1.jpg"
},
"role": "reference_image"
},
{
"type": "image_url",
"image_url": {
"url": "https://ark-project.tos-cn-beijing.volces.com/doc_image/r2v_tea_pic2.jpg"
},
"role": "reference_image"
},
{
"type": "video_url",
"video_url": {
"url": "https://ark-project.tos-cn-beijing.volces.com/doc_video/r2v_tea_video1.mp4"
},
"role": "reference_video"
},
{
"type": "audio_url",
"audio_url": {
"url": "https://ark-project.tos-cn-beijing.volces.com/doc_audio/r2v_tea_audio1.mp3"
},
"role": "reference_audio"
}
],
"resolution": "1080p",
"generate_audio":true,
"ratio": "16:9",
"duration": 11,
"watermark": false
}'
```
#### **Feature 2: input video resolution supports 1080P**
* **Feature description**: the total-pixel limit for input videos is raised to 2086876 (2206x946), so 1080P videos can be passed in as references
* **Launch time**: rollout (domestic and overseas) expected to complete on April 16 at 22:00
* **User scope**: all users; official docs published at the same time
* **Supported models**: both Seedance 2.0 and Seedance 2.0 fast
### Pricing
1080P is priced separately from 720P/480P.
Pricing details: https://www.volcengine.com/docs/82379/1544106?lang=zh#02affcb8
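For a rough sense of the gap, applying the token formula shown earlier in this compare to a 5 s 16:9 clip, with the 46 vs 51 CNY-per-million-token prices used in this repo's QuotaConfig defaults (an assumption here, not from this document):

```python
# token estimate: width x height x fps x duration / 1024
tok_720 = round(1280 * 720 * 24 * 5 / 1024)    # 108000
tok_1080 = round(1920 * 1080 * 24 * 5 / 1024)  # 243000

# cost in CNY at 46 (720P) vs 51 (1080P) per million tokens
print(tok_720 * 46 / 1e6)    # 4.968
print(tok_1080 * 51 / 1e6)   # 12.393
```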

View File

@ -0,0 +1,81 @@
`POST https://ark.cn-beijing.volcengineapi.com/?Action=DeleteAssetGroup&Version=2024-01-01`
Delete an asset group (Asset Group).
:::warning
* Deleting an asset group batch-deletes every asset in the group. The operation is irreversible; once deleted, nothing can be recovered. Proceed with caution.
* If the group contains many assets, deletion may take some time.
* For live-person asset groups created in the Ark console, **only groups whose authorization has expired or been declined can be deleted;** assets within a valid authorization window, before it starts, or already accepted cannot be deleted.
:::
```mixin-react
return (<Tabs>
<Tabs.TabPane title="快速入口" key="lvNm2K2e"><RenderMd content={`<span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_57d0bca8e0d122ab1191b40101b5df75.png =20x) </span> [调用教程](https://www.volcengine.com/docs/82379/2333565) <span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_f45b5cd5863d1eed3bc3c81b9af54407.png =20x) </span> [接口列表](https://www.volcengine.com/docs/82379/2333601) <span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_bef4bc3de3535ee19d0c5d6c37b0ffdd.png =20x) </span> [开通模型](https://console.volcengine.com/ark/region:ark+cn-beijing/openManagement?LLM=%7B%7D&OpenTokenDrawer=false)
`}></RenderMd></Tabs.TabPane>
<Tabs.TabPane title="鉴权说明" key="jTfWOr00"><RenderMd content={`本接口仅支持 Access KeyAK/SK鉴权。
`}></RenderMd></Tabs.TabPane></Tabs>);
```
---
<span id="jsH8fA91"></span>
## Request parameters
<span id="LZAiABbI"></span>
### Request body
---
**Id** `string` %%require%%
ID of the asset group to delete.
---
**ProjectName** `string`
Project name the asset group belongs to; the default is default.
If the resource is not in the default project, supply the correct project name; see the [docs](https://www.volcengine.com/docs/82379/1359411?lang=zh#03ec4a65) for how to find it.
<span id="pzKKGMI2"></span>
## Response parameters
:::tip
This API returns no business parameters.
:::
---
<span id="W7hm7e18"></span>
## Request example
```text
POST /?Action=DeleteAssetGroup&Version=2024-01-01 HTTP/1.1
Host: ark.cn-beijing.volcengineapi.com
Content-Type: application/json
X-Date: 20260328T000000Z
X-Content-Sha256: 287e874e******d653b44d21e
Authorization: HMAC-SHA256 Credential=AKLTYz******/20260328/cn-beijing/ark/request, SignedHeaders=content-type;host;x-content-sha256;x-date, Signature=47a7d934******e41085f
{
"Id": "group-2026**********-*****",
"ProjectName": "default"
}
```
<span id="wUMBBtP9"></span>
## Response example
```json
{
"ResponseMetadata": {
"RequestId": "20260328000000000000000000000000",
"Action": "DeleteAssetGroup",
"Version": "2024-01-01",
"Service": "ark",
"Region": "cn-beijing"
},
"Result": {}
}
```

View File

@ -0,0 +1,74 @@
`POST https://ark.cn-beijing.volcengineapi.com/?Action=DeleteAsset&Version=2024-01-01`
This document describes the input and output parameters of the Delete Asset API, for looking up field meanings when using the interface.
```mixin-react
return (<Tabs>
<Tabs.TabPane title="快速入口" key="XUjyLha2Xp"><RenderMd content={`<span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_57d0bca8e0d122ab1191b40101b5df75.png =20x) </span> [调用教程](https://www.volcengine.com/docs/82379/2333565) <span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_f45b5cd5863d1eed3bc3c81b9af54407.png =20x) </span> [接口列表](https://www.volcengine.com/docs/82379/2333601) <span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_bef4bc3de3535ee19d0c5d6c37b0ffdd.png =20x) </span> [开通模型](https://console.volcengine.com/ark/region:ark+cn-beijing/openManagement?LLM=%7B%7D&OpenTokenDrawer=false)
`}></RenderMd></Tabs.TabPane>
<Tabs.TabPane title="鉴权说明" key="ja8gRJxaz4"><RenderMd content={`本接口仅支持 Access KeyAK/SK鉴权。
`}></RenderMd></Tabs.TabPane></Tabs>);
```
---
<span id="request-params"></span>
## Request parameters
<span id="request-body"></span>
### Request body
---
**Id** `string` %%require%%
Id of the Asset to delete.
---
**ProjectName** `string`
Project name the Asset belongs to; the default is default.
If the resource is not in the default project, supply the correct project name; see the [docs](https://www.volcengine.com/docs/82379/1359411?lang=zh#03ec4a65) for how to find it.
<span id="response-params"></span>
## Response parameters
:::tip
This API returns no business parameters.
:::
---
<span id=".6K-35rGC56S65L6L"></span>
## Request example
```text
POST /?Action=DeleteAsset&Version=2024-01-01 HTTP/1.1
Host: ark.cn-beijing.volcengineapi.com
Content-Type: application/json
X-Date: 20260328T000000Z
X-Content-Sha256: 287e874e******d653b44d21e
Authorization: HMAC-SHA256 Credential=AKLTYz******/20260328/cn-beijing/ark/request, SignedHeaders=content-type;host;x-content-sha256;x-date, Signature=47a7d934******e41085f
{
"Id": "Asset-2026**********-*****",
"ProjectName": "default"
}
```
<span id=".5ZON5bqU56S65L6L"></span>
## Response example
```json
{
"ResponseMetadata": {
"RequestId": "20260328000000000000000000000000",
"Action": "DeleteAsset",
"Version": "2024-01-01",
"Service": "ark",
"Region": "cn-beijing"
},
"Result": {}
}
```

View File

@ -0,0 +1,86 @@
`POST https://ark.cn-beijing.volcengineapi.com/?Action=UpdateAsset&Version=2024-01-01`
This document describes the input and output parameters of the Update Asset API, for looking up field meanings when using the interface. Currently only `Name` can be updated.
```mixin-react
return (<Tabs>
<Tabs.TabPane title="快速入口" key="GKGUlkIXAR"><RenderMd content={`<span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_57d0bca8e0d122ab1191b40101b5df75.png =20x) </span> [调用教程](https://www.volcengine.com/docs/82379/2333565) <span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_f45b5cd5863d1eed3bc3c81b9af54407.png =20x) </span> [接口列表](https://www.volcengine.com/docs/82379/2318269) <span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_bef4bc3de3535ee19d0c5d6c37b0ffdd.png =20x) </span> [开通模型](https://console.volcengine.com/ark/region:ark+cn-beijing/openManagement?LLM=%7B%7D&OpenTokenDrawer=false)
`}></RenderMd></Tabs.TabPane>
<Tabs.TabPane title="鉴权说明" key="Hufey2Y56Z"><RenderMd content={`本接口仅支持 Access KeyAK/SK鉴权。
`}></RenderMd></Tabs.TabPane></Tabs>);
```
---
<span id="request-params"></span>
## Request parameters
<span id="request-body"></span>
### Request body
---
**Id** `string` %%require%%
Id of the Asset to update.
---
**Name** `string`
New name for the Asset, up to 64 characters.
---
**ProjectName** `string`
Project name the Asset belongs to; the default is default.
If the resource is not in the default project, supply the correct project name; see the [docs](https://www.volcengine.com/docs/82379/1359411?lang=zh#03ec4a65) for how to find it.
<span id="response-params"></span>
## Response parameters
---
**Id** `string`
Id of the Asset.
---
<span id=".6K-35rGC56S65L6L"></span>
## Request example
```text
POST /?Action=UpdateAsset&Version=2024-01-01 HTTP/1.1
Host: ark.cn-beijing.volcengineapi.com
Content-Type: application/json
X-Date: 20260328T000000Z
X-Content-Sha256: 287e874e******d653b44d21e
Authorization: HMAC-SHA256 Credential=AKLTYz******/20260328/cn-beijing/ark/request, SignedHeaders=content-type;host;x-content-sha256;x-date, Signature=47a7d934******e41085f
{
"Id": "Asset-2026**********-*****",
"Name": "new-name",
"ProjectName": "default"
}
```
<span id=".5ZON5bqU56S65L6L"></span>
## Response example
```json
{
"ResponseMetadata": {
"RequestId": "20260328000000000000000000000000",
"Action": "UpdateAsset",
"Version": "2024-01-01",
"Service": "ark",
"Region": "cn-beijing"
},
"Result": {
"Id": "Asset-2026**********-*****"
}
}
```

View File

@ -0,0 +1,93 @@
`POST https://ark.cn-beijing.volcengineapi.com/?Action=UpdateAssetGroup&Version=2024-01-01`
Update a single Asset Group's information. Currently only the Name and Description of an Asset Group can be updated.
```mixin-react
return (<Tabs>
<Tabs.TabPane title="快速入口" key="dZF0anlOBU"><RenderMd content={`<span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_57d0bca8e0d122ab1191b40101b5df75.png =20x) </span> [调用教程](https://www.volcengine.com/docs/82379/2333565) <span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_f45b5cd5863d1eed3bc3c81b9af54407.png =20x) </span> [接口列表](https://www.volcengine.com/docs/82379/2318269) <span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_bef4bc3de3535ee19d0c5d6c37b0ffdd.png =20x) </span> [开通模型](https://console.volcengine.com/ark/region:ark+cn-beijing/openManagement?LLM=%7B%7D&OpenTokenDrawer=false)
`}></RenderMd></Tabs.TabPane>
<Tabs.TabPane title="鉴权说明" key="peae1e0Xvc"><RenderMd content={`本接口仅支持 Access KeyAK/SK鉴权。
`}></RenderMd></Tabs.TabPane></Tabs>);
```
---
<span id="request-params"></span>
## Request parameters
<span id="request-body"></span>
### Request body
---
**Id** `string` %%require%%
Id of the Asset Group to update.
---
**Name** `string`
New name for the Asset Group, up to 64 characters.
---
**Description** `string`
New description for the Asset Group, up to 300 characters.
---
**ProjectName** `string`
Project name the Asset Group belongs to; the default is default.
If the resource is not in the default project, supply the correct project name; see the [docs](https://www.volcengine.com/docs/82379/1359411?lang=zh#03ec4a65) for how to find it.
<span id="response-params"></span>
## Response parameters
---
**Id** `string`
Id of the Asset Group.
---
<span id=".6K-35rGC56S65L6L"></span>
## Request example
```text
POST /?Action=UpdateAssetGroup&Version=2024-01-01 HTTP/1.1
Host: ark.cn-beijing.volcengineapi.com
Content-Type: application/json
X-Date: 20260328T000000Z
X-Content-Sha256: 287e874e******d653b44d21e
Authorization: HMAC-SHA256 Credential=AKLTYz******/20260328/cn-beijing/ark/request, SignedHeaders=content-type;host;x-content-sha256;x-date, Signature=47a7d934******e41085f
{
"Id": "group-2026**********-*****",
"Name": "new-name",
"Description": "new-description",
"ProjectName": "default"
}
```
<span id=".5ZON5bqU56S65L6L"></span>
## Response example
```json
{
"ResponseMetadata": {
"RequestId": "20260328000000000000000000000000",
"Action": "UpdateAssetGroup",
"Version": "2024-01-01",
"Service": "ark",
"Region": "cn-beijing"
},
"Result": {
"Id": "group-2026**********-*****"
}
}
```

View File

@ -0,0 +1,168 @@
`POST https://ark.cn-beijing.volcengineapi.com/?Action=GetAsset&Version=2024-01-01`
Query an asset's status to confirm whether preprocessing has finished and the asset is ready for inference.
```mixin-react
return (<Tabs>
<Tabs.TabPane title="快速入口" key="k6xoCAAzLe"><RenderMd content={`<span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_57d0bca8e0d122ab1191b40101b5df75.png =20x) </span> [调用教程](https://www.volcengine.com/docs/82379/2333565) <span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_f45b5cd5863d1eed3bc3c81b9af54407.png =20x) </span> [接口列表](https://www.volcengine.com/docs/82379/2318269) <span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_bef4bc3de3535ee19d0c5d6c37b0ffdd.png =20x) </span> [开通模型](https://console.volcengine.com/ark/region:ark+cn-beijing/openManagement?LLM=%7B%7D&OpenTokenDrawer=false)
`}></RenderMd></Tabs.TabPane>
<Tabs.TabPane title="鉴权说明" key="NcZDHXiFXA"><RenderMd content={`本接口仅支持 Access KeyAK/SK鉴权。
`}></RenderMd></Tabs.TabPane></Tabs>);
```
---
<span id="request-params"></span>
## Request parameters
<span id="request-body"></span>
### Request body
---
**Id** `string` %%require%%
Id of the Asset.
---
**ProjectName** `string`
Project name the Asset belongs to; the default is `default`.
If the resource is not in the default project, supply the correct project name; see the [docs](https://www.volcengine.com/docs/82379/1359411?lang=zh#03ec4a65) for how to find it.
<span id="response-params"></span>
## Response parameters
---
**Id** `string`
Id of the Asset.
---
**Name** `string`
Name of the Asset, up to 64 characters.
---
**URL** `string`
Access URL of the Asset. Valid for 12 hours; save it promptly.
---
**AssetType** `string`
Type of the Asset. Possible values:
* `Image`
* `Video`
* `Audio`
---
**GroupId** `string`
Id of the Asset Group the Asset belongs to.
---
**Status** `string`
Asset status. Possible values:
* `Active`: processing finished, ready to use
* `Processing`: preprocessing in progress, not yet usable
* `Failed`: processing failed
---
**Error** `object`
Error information.
Properties
---
Error.**Code** `string`
Error code.
---
Error.**Message** `string`
Error message.
---
**CreateTime** `string`
Creation time.
---
**UpdateTime** `string`
Update time.
---
**ProjectName** `string`
Project the resource belongs to.
---
<span id=".6K-35rGC56S65L6L"></span>
## Request example
```text
POST /?Action=GetAsset&Version=2024-01-01 HTTP/1.1
Host: ark.cn-beijing.volcengineapi.com
Content-Type: application/json
X-Date: 20260328T000000Z
X-Content-Sha256: 287e874e******d653b44d21e
Authorization: HMAC-SHA256 Credential=AKLTYz******/20260328/cn-beijing/ark/request, SignedHeaders=content-type;host;x-content-sha256;x-date, Signature=47a7d934******e41085f
{
"Id": "Asset-2026**********-*****",
"ProjectName": "default"
}
```
<span id=".5ZON5bqU56S65L6L"></span>
## Response example
```json
{
"ResponseMetadata": {
"RequestId": "20260328000000000000000000000000",
"Action": "GetAsset",
"Version": "2024-01-01",
"Service": "ark",
"Region": "cn-beijing"
},
"Result": {
"Id": "Asset-2026**********-*****",
"Name": "test",
"URL": "https://example.com/asset-url",
"AssetType": "Image",
"GroupId": "group-2026**********-*****",
"Status": "Active",
"Error": {
"Code": "",
"Message": ""
},
"CreateTime": "2026-03-28T00:00:00Z",
"UpdateTime": "2026-03-28T00:00:00Z",
"ProjectName": "default"
}
}
```

View File

@ -0,0 +1,280 @@
`POST https://ark.cn-beijing.volcengineapi.com/?Action=ListAssets&Version=2024-01-01`
List the Assets matching the given filter conditions.
```mixin-react
return (<Tabs>
<Tabs.TabPane title="快速入口" key="X3ZqVHf6Rr"><RenderMd content={`<span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_57d0bca8e0d122ab1191b40101b5df75.png =20x) </span> [调用教程](https://www.volcengine.com/docs/82379/2333565) <span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_f45b5cd5863d1eed3bc3c81b9af54407.png =20x) </span> [接口列表](https://www.volcengine.com/docs/82379/2318269) <span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_bef4bc3de3535ee19d0c5d6c37b0ffdd.png =20x) </span> [开通模型](https://console.volcengine.com/ark/region:ark+cn-beijing/openManagement?LLM=%7B%7D&OpenTokenDrawer=false)
`}></RenderMd></Tabs.TabPane>
<Tabs.TabPane title="鉴权说明" key="hEYP88LFdX"><RenderMd content={`本接口仅支持 Access KeyAK/SK鉴权。
`}></RenderMd></Tabs.TabPane></Tabs>);
```
---
<span id="request-params"></span>
## 请求参数
<span id="request-body"></span>
### 请求体
---
**Filter** `object` %%require%%
搜索的过滤条件。
属性
---
Filter.**GroupIds** `string[]`
Asset素材资产所属的 Asset Group素材资产组合的 Id 列表。
---
Filter.**GroupType** `string` %%require%%
Asset Group素材资产组合的类型。可选值
* `AIGC`:虚拟人像
* `LivenessFace`:真人素材
---
Filter.**Statuses** `string[]`
素材资产状态。可选值:
* `Active`:素材资产已处理完毕,可以使用
* `Processing`:素材资产正在预处理,无法使用
* `Failed`:素材资产处理失败
---
Filter.**Name** `string`
Asset素材资产的名称上限为 64 个字符。
---
**PageNumber** `integer (i64)` %%require%%
搜索页码,从 1 开始。例如:`1` 表示返回第一页的搜索结果。
---
**PageSize** `integer (i64)` %%require%%
每页搜索结果的数量,上限为 100。
---
**SortBy** `string`
用于排序的字段名称,默认值为 `CreateTime`。可选值:
* `CreateTime`:根据创建时间排序
* `UpdateTime`:根据更新时间排序
* `GroupId`:根据资产素材组的 Id 排序
---
**SortOrder** `string`
排序顺序,默认值为 `Desc`。可选值:
* `Desc`:降序
* `Asc`:升序
---
**ProjectName** `string`
资源所属的项目名称默认值为default。
若资源不在默认项目中,需填写正确的项目名称,获取项目名称,请查看 [文档](https://www.volcengine.com/docs/82379/1359411?lang=zh#03ec4a65)。
<span id="response-params"></span>
## Response Parameters
---
**Items** `object[]`
Array of Assets matching the filter conditions.
Properties
---
Items.**Id** `string`
Id of the Asset.
---
Items.**Name** `string`
Name of the Asset, up to 64 characters.
---
Items.**URL** `string`
Publicly accessible URL of the Asset. Valid for 12 hours; save it promptly.
---
Items.**GroupId** `string`
Id of the Asset Group the Asset belongs to.
---
Items.**AssetType** `string`
Type of the Asset. Options:
* `Image`: image
* `Video`: video
* `Audio`: audio
---
Items.**Status** `string`
Task status. Options:
* `Active`
* `Processing`
* `Failed`
---
Items.**Error** `object`
Error information.
Properties
---
Items.Error.**Code** `string`
Error code.
---
Items.Error.**Message** `string`
Error message.
---
Items.**ProjectName** `string`
Name of the project the resource belongs to.
---
Items.**CreateTime** `string`
Creation time.
---
Items.**UpdateTime** `string`
Update time.
---
**TotalCount** `integer (i64)`
Total number of results.
---
**PageNumber** `integer (i64)`
Page number returned.
---
**PageSize** `integer (i64)`
Number of results per page, up to 100.
---
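Since results are paged by PageNumber/PageSize and TotalCount reports the overall total, a caller can walk the pages until every item is collected. A hedged Python sketch (`fetch_page` is a stand-in for a real ListAssets call, not an SDK function):

```python
# Hypothetical sketch: paging through ListAssets-style results.
# The field names (Items, TotalCount, PageNumber, PageSize) mirror
# this API's response; `fetch_page` stands in for a real call.

def fetch_all(fetch_page, page_size=10):
    """Collect every item by walking PageNumber until TotalCount is reached."""
    items, page = [], 1
    while True:
        resp = fetch_page(PageNumber=page, PageSize=page_size)
        items.extend(resp["Items"])
        if len(items) >= resp["TotalCount"] or not resp["Items"]:
            return items
        page += 1

# Fake backend with 23 assets to demonstrate the loop.
DATA = [{"Id": f"asset-{i:04d}"} for i in range(23)]

def fake_list_assets(PageNumber, PageSize):
    start = (PageNumber - 1) * PageSize
    return {"Items": DATA[start:start + PageSize],
            "TotalCount": len(DATA),
            "PageNumber": PageNumber,
            "PageSize": PageSize}

all_items = fetch_all(fake_list_assets, page_size=10)
print(len(all_items))  # 23
```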
<span id=".6K-35rGC56S65L6L"></span>
## Request Example
```text
POST /?Action=ListAssets&Version=2024-01-01 HTTP/1.1
Host: ark.cn-beijing.volcengineapi.com
Content-Type: application/json
X-Date: 20260328T000000Z
X-Content-Sha256: 287e874e******d653b44d21e
Authorization: HMAC-SHA256 Credential=AKLTYz******/20260328/cn-beijing/ark/request, SignedHeaders=content-type;host;x-content-sha256;x-date, Signature=47a7d934******e41085f
{
"Filter": {
"GroupIds": [
"group-2026**********-*****"
],
"GroupType": "AIGC",
"Statuses": [
"Active"
],
"Name": "test"
},
"PageNumber": 1,
"PageSize": 10,
"SortBy": "CreateTime",
"SortOrder": "Desc",
"ProjectName": "default"
}
```
<span id=".5ZON5bqU56S65L6L"></span>
## Response Example
```json
{
"ResponseMetadata": {
"RequestId": "20260328000000000000000000000000",
"Action": "ListAssets",
"Version": "2024-01-01",
"Service": "ark",
"Region": "cn-beijing"
},
"Result": {
"Items": [
{
"Id": "Asset-2026**********-*****",
"Name": "test",
"URL": "https://example.com/asset-url",
"GroupId": "group-2026**********-*****",
"AssetType": "Image",
"Status": "Active",
"Error": {
"Code": "",
"Message": ""
},
"ProjectName": "default",
"CreateTime": "2026-03-28T00:00:00Z",
"UpdateTime": "2026-03-28T00:00:00Z"
}
],
"TotalCount": 1,
"PageNumber": 1,
"PageSize": 10
}
}
```

`POST https://ark.cn-beijing.volcengineapi.com/?Action=GetAssetGroup&Version=2024-01-01`
Get information about a single Asset Group.
```mixin-react
return (<Tabs>
<Tabs.TabPane title="Quick Links" key="HRBKuuT0TQ"><RenderMd content={`<span>![image](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_57d0bca8e0d122ab1191b40101b5df75.png =20x) </span> [Tutorial](https://www.volcengine.com/docs/82379/2333565) <span>![image](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_f45b5cd5863d1eed3bc3c81b9af54407.png =20x) </span> [API List](https://www.volcengine.com/docs/82379/2333601) <span>![image](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_bef4bc3de3535ee19d0c5d6c37b0ffdd.png =20x) </span> [Enable Models](https://console.volcengine.com/ark/region:ark+cn-beijing/openManagement?LLM=%7B%7D&OpenTokenDrawer=false)
`}></RenderMd></Tabs.TabPane>
<Tabs.TabPane title="Authentication" key="MtPj6OfJKp"><RenderMd content={`This API supports Access Key (AK/SK) authentication only.
`}></RenderMd></Tabs.TabPane></Tabs>);
```
---
<span id="request-params"></span>
## Request Parameters
<span id="request-body"></span>
### Request Body
---
**Id** `string` %%require%%
Id of the Asset Group.
---
**ProjectName** `string`
Name of the project that the Asset Group to query belongs to. Defaults to `default`.
If the resource is not in the default project, enter the correct project name. To look up project names, see the [documentation](https://www.volcengine.com/docs/82379/1359411?lang=zh#03ec4a65).
<span id="response-params"></span>
## Response Parameters
---
**Id** `string`
Id of the Asset Group.
---
**Name** `string`
Name of the Asset Group, up to 64 characters.
---
**Description** `string`
Description of the Asset Group, up to 300 characters.
---
**GroupType** `string`
Type of the Asset Group. Options:
* `AIGC`: virtual figure
* `LivenessFace`: real-person material
---
**ProjectName** `string`
Name of the project the resource belongs to.
---
**CreateTime** `string`
Creation time.
---
**UpdateTime** `string`
Update time.
---
<span id=".6K-35rGC56S65L6L"></span>
## Request Example
```text
POST /?Action=GetAssetGroup&Version=2024-01-01 HTTP/1.1
Host: ark.cn-beijing.volcengineapi.com
Content-Type: application/json
X-Date: 20260328T000000Z
X-Content-Sha256: 287e874e******d653b44d21e
Authorization: HMAC-SHA256 Credential=AKLTYz******/20260328/cn-beijing/ark/request, SignedHeaders=content-type;host;x-content-sha256;x-date, Signature=47a7d934******e41085f
{
"Id": "group-2026**********-*****",
"ProjectName": "default"
}
```
<span id=".5ZON5bqU56S65L6L"></span>
## Response Example
```json
{
"ResponseMetadata": {
"RequestId": "20260328000000000000000000000000",
"Action": "GetAssetGroup",
"Version": "2024-01-01",
"Service": "ark",
"Region": "cn-beijing"
},
"Result": {
"Id": "group-2026**********-*****",
"Name": "test",
"Description": "test",
"GroupType": "AIGC",
"ProjectName": "default",
"CreateTime": "2026-03-28T00:00:00Z",
"UpdateTime": "2026-03-28T00:00:00Z"
}
}
```

`POST https://ark.cn-beijing.volcengineapi.com/?Action=ListAssetGroups&Version=2024-01-01`
Query the list of Asset Groups that match the filter conditions.
```mixin-react
return (<Tabs>
<Tabs.TabPane title="Quick Links" key="NdTz6AmVwP"><RenderMd content={`<span>![image](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_57d0bca8e0d122ab1191b40101b5df75.png =20x) </span> [Tutorial](https://www.volcengine.com/docs/82379/2333565) <span>![image](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_f45b5cd5863d1eed3bc3c81b9af54407.png =20x) </span> [API List](https://www.volcengine.com/docs/82379/2318269) <span>![image](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_bef4bc3de3535ee19d0c5d6c37b0ffdd.png =20x) </span> [Enable Models](https://console.volcengine.com/ark/region:ark+cn-beijing/openManagement?LLM=%7B%7D&OpenTokenDrawer=false)
`}></RenderMd></Tabs.TabPane>
<Tabs.TabPane title="Authentication" key="jAo0Qz18Pr"><RenderMd content={`This API supports Access Key (AK/SK) authentication only.
`}></RenderMd></Tabs.TabPane></Tabs>);
```
---
<span id="request-params"></span>
## Request Parameters
<span id="request-body"></span>
### Request Body
---
**Filter** `object` %%require%%
Filter conditions for the search.
Properties
---
Filter.**GroupIds** `string[]`
List of Asset Group Ids.
---
Filter.**GroupType** `string` %%require%%
Type of the Asset Group. Options:
* `AIGC`: virtual figure
* `LivenessFace`: real-person material
---
Filter.**Name** `string`
Name of the Asset Group, up to 64 characters.
---
**PageNumber** `integer (i64)` %%require%%
Page number of the search, used for pagination, starting from 1. For example, `"PageNumber": 1` returns the first page of results.
---
**PageSize** `integer (i64)` %%require%%
Number of results per page, up to 100.
---
**SortBy** `string`
Field to sort by. Defaults to `CreateTime`. Options:
* `CreateTime`: sort by creation time
* `UpdateTime`: sort by update time
---
**SortOrder** `string`
Sort order. Defaults to `Desc`. Options:
* `Desc`: descending
* `Asc`: ascending
---
**ProjectName** `string`
Name of the project the resource belongs to. Defaults to `default`.
If the resource is not in the default project, enter the correct project name. To look up project names, see the [documentation](https://www.volcengine.com/docs/82379/1359411?lang=zh#03ec4a65).
<span id="response-params"></span>
## Response Parameters
---
**TotalCount** `integer (i64)`
Total number of Asset Groups returned.
---
**Items** `object[]`
Array of Asset Groups matching the filter conditions.
---
Items.**Id** `string`
Id of the Asset Group.
---
Items.**Name** `string`
Name of the Asset Group, up to 64 characters.
---
Items.**Description** `string`
Description of the Asset Group, up to 300 characters.
---
Items.**GroupType** `string`
Type of the Asset Group. Options:
* `AIGC`: virtual figure
* `LivenessFace`: real-person material
---
Items.**ProjectName** `string`
Name of the project the resource belongs to.
---
Items.**CreateTime** `string`
Creation time.
---
Items.**UpdateTime** `string`
Update time.
---
<span id="jb2Vro9B"></span>
**PageNumber** `integer (i64)`
Page number returned.
---
<span id="r0oQyygI"></span>
**PageSize** `integer (i64)`
Number of results per page, up to 100.
---
<span id=".6K-35rGC56S65L6L"></span>
## Request Example
```text
POST /?Action=ListAssetGroups&Version=2024-01-01 HTTP/1.1
Host: ark.cn-beijing.volcengineapi.com
Content-Type: application/json
X-Date: 20260328T000000Z
X-Content-Sha256: 287e874e******d653b44d21e
Authorization: HMAC-SHA256 Credential=AKLTYz******/20260328/cn-beijing/ark/request, SignedHeaders=content-type;host;x-content-sha256;x-date, Signature=47a7d934******e41085f
{
"Filter": {
"Name": "test",
"GroupType": "AIGC"
},
"PageNumber": 1,
"PageSize": 10,
"SortBy": "CreateTime",
"SortOrder": "Desc",
"ProjectName": "default"
}
```
<span id=".5ZON5bqU56S65L6L"></span>
## Response Example
```json
{
"ResponseMetadata": {
"RequestId": "20260328000000000000000000000000",
"Action": "ListAssetGroups",
"Version": "2024-01-01",
"Service": "ark",
"Region": "cn-beijing"
},
"Result": {
"TotalCount": 1,
"Items": [
{
"Id": "group-2026**********-*****",
"Name": "test",
"Title": "test",
"Description": "test",
"GroupType": "AIGC",
"ProjectName": "default",
"CreateTime": "2026-03-28T00:00:00Z",
"UpdateTime": "2026-03-28T00:00:00Z"
}
],
"PageNumber": 1,
"PageSize": 10
}
}
```

:::danger
* For invited beta users only; do not screenshot or share with others.
* The CreateAsset upload API is asynchronous; system processing may queue, which lengthens ingestion time. No upload-time SLA is promised.
* Assets should be virtual figures; non-virtual-figure material does not need to be ingested.
* You must ensure that the uploaded virtual figures meet the following conditions:
* You legally own the material and hold full rights to use and dispose of it. The material contains no unauthorized third-party trademarks or logos.
* The material must not resemble any natural person's likeness or image, must not be plagiarized or misappropriated, and must not infringe any third party's personality rights, intellectual property, or other lawful rights.
* The material contains nothing that violates laws or regulations, offends public order and morals, or endangers national security.
:::
The Seedance 2.0 model family has comprehensive safeguards against deepfakes and copyright infringement. During video generation, risky reference inputs are intercepted to keep generated videos compliant and safe to the greatest extent possible.
To let creators make full use of Seedance 2.0's powerful video generation capabilities while avoiding the potential risks of AI-generated content, Ark provides a private trusted asset library. Trusted assets that complete ingestion enter your private asset library and can be used in video generation.
The workflow for the private asset library is as follows:
<div style="text-align: center"><img src="https://p9-arcosite.byteimg.com/tos-cn-i-goo7wpa0wc/fa131ff017324d228b8a07c9bde49d4d~tplv-goo7wpa0wc-image.image" width="3866px" /></div>
<span id="2b7bf522"></span>
# Asset Library Structure
* **Asset Group**: each individual material file is an Asset, and every Asset belongs to one Asset Group.
* You can use asset groups to organize assets freely, e.g. by putting all assets of the same virtual character into one group.
* **Asset**: a single material file (uploading images, videos, and audio is currently supported) that the Ark Seedance 2.0 models can use directly for inference as a trusted asset.
:::tip
Note
* Only ingest assets that will actually be used for inference; do not ingest assets you will not use.
* Only the Id (Asset ID) of an ingested asset can be used for video generation; un-ingested material of the same figure cannot be used.
* Each uploaded asset goes through preprocessing. Poll the **GetAsset** API to check the asset's status (the **Status** field). Only after the status becomes `Active` can the asset be used for inference; `Failed` means processing failed and the asset cannot be used. For details, see [Example: upload an asset and fetch its info with GetAsset](/docs/82379/2333565#5c0ee427).
:::
**Taking image asset upload as an example:**
* **Requirements for a single image file:**
* Format: jpeg, png, webp, bmp, tiff, gif, heic/heif
* Aspect ratio (width/height): (0.4, 2.5)
* Width/height in px: (300, 6000)
* Size: each image under 30 MB.
* To ensure that **facial features, clothing details, and so on in generated videos stay consistent with the uploaded assets**, we recommend uploading multiple assets of the same person into one asset group, following the rules and examples below:
* **Best practices for portrait asset content:**
:::tip
**Full-body reference image**
* Orientation: portrait
* Content: full-body, front-facing photo of the person
:::
<div style="text-align: center"><img src="https://p9-arcosite.byteimg.com/tos-cn-i-goo7wpa0wc/86fef6988c8449c2a3d9062b2fa50e96~tplv-goo7wpa0wc-image.image" width="333px" /></div>
:::tip
**Face close-up image**
* Orientation: portrait
* Content: front-facing, neutral-expression close-up from the shoulders up, with the face filling about 2/3 of the frame
:::
<div style="text-align: center"><img src="https://p9-arcosite.byteimg.com/tos-cn-i-goo7wpa0wc/6188f2b280eb43a3821644071a2c5485~tplv-goo7wpa0wc-image.image" width="272px" /></div>
<span id="d54e09a3"></span>
# Assets API Functions
:::warning
Calling the Assets APIs requires Access Key authentication; see [Obtain API access keys (AK/SK)](https://www.volcengine.com/docs/6257/64983?lang=zh).
:::
<span id="85305caa"></span>
## API List
<span id="72169511"></span>
## **Asset (Group) Creation APIs**
1. [CreateAssetGroup](https://www.volcengine.com/docs/82379/2318270): create an Asset Group. **Signing an authorization letter in the console is required the first time you create an Asset Group; see** [Private virtual-figure asset library guide (invited beta)](/docs/82379/2333565).
2. [CreateAsset](https://www.volcengine.com/docs/82379/2318271): create an Asset. Use this API to upload your own material; the **Id** returned (once the asset is **`Active`**) can be used with the Seedance 2.0 models to generate videos.
<span id="5e9c0b10"></span>
## **Asset (Group) Management APIs**
* [ListAssetGroups](https://www.volcengine.com/docs/82379/2318272): list Asset Groups.
* [ListAssets](https://www.volcengine.com/docs/82379/2318273): list Assets.
* [GetAsset](https://www.volcengine.com/docs/82379/2318274): get Asset information.
* [GetAssetGroup](https://www.volcengine.com/docs/82379/2318275): get Asset Group information.
* [UpdateAssetGroup](https://www.volcengine.com/docs/82379/2318276): update Asset Group information.
* [UpdateAsset](https://www.volcengine.com/docs/82379/2318277): update Asset information.
* [DeleteAsset](https://www.volcengine.com/docs/82379/2318278): delete a single Asset.
* [DeleteAssetGroup](https://www.volcengine.com/docs/82379/2341606): delete an Asset Group.
<span id="987b4caa"></span>
## Rate Limits
:::tip
* **QPS**: the maximum number of requests allowed per **second** for an API; requests beyond it fail.
* **QPM**: the maximum number of requests allowed per **minute** for an API; requests beyond it fail.
:::
|API |Per-account limit |
|---|---|
|CreateAssetGroup |10 QPS |
|CreateAsset |300 QPM |
|ListAssetGroups |10 QPS |
|ListAssets |10 QPS |
|GetAsset |100 QPS |
|GetAssetGroup |10 QPS |
|UpdateAsset |10 QPS |
|UpdateAssetGroup |10 QPS |
|DeleteAsset |10 QPS |
|DeleteAssetGroup |5 QPS |
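These limits are enforced server-side per account. A client that batches many calls (for example, polling GetAsset for many assets at once) may still want a simple local throttle to avoid bursting past a per-second cap. A sliding-window sketch in Python (an illustration only, not an SDK feature):

```python
import time
from collections import deque

# Hedged client-side throttle sketch. The limits above are account-wide,
# so a local limiter only helps a single client avoid obvious bursts.

class QpsLimiter:
    def __init__(self, max_calls, per_seconds=1.0):
        self.max_calls = max_calls
        self.per_seconds = per_seconds
        self.calls = deque()          # timestamps of recent calls

    def acquire(self):
        """Block until a call is allowed under the sliding window."""
        while True:
            now = time.monotonic()
            # Drop timestamps that have left the window.
            while self.calls and now - self.calls[0] >= self.per_seconds:
                self.calls.popleft()
            if len(self.calls) < self.max_calls:
                self.calls.append(now)
                return
            # Sleep until the oldest call in the window expires.
            time.sleep(self.per_seconds - (now - self.calls[0]))

limiter = QpsLimiter(max_calls=10)    # e.g. ListAssets: 10 QPS per account
start = time.monotonic()
for _ in range(12):
    limiter.acquire()                 # calls 11-12 wait for the 1 s window
elapsed = time.monotonic() - start
print(f"12 calls took {elapsed:.2f}s")
```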
<span id="5d0da843"></span>
# Tutorial
<span id="b4a41fe1"></span>
## Upload assets to the private virtual-figure library (API & console)
You can upload your own virtual figures to the private virtual-figure library.
:::danger
You must ensure that the uploaded virtual figures meet the following conditions:
You legally own the material and hold full rights to use and dispose of it. The material contains no unauthorized third-party trademarks or logos.
The material must not resemble any natural person's likeness or image, must not be plagiarized or misappropriated, and must not infringe any third party's personality rights, intellectual property, or other lawful rights.
The material contains nothing that violates laws or regulations, offends public order and morals, or endangers national security.
:::
Ark performs a safety review of the material you upload. Once the review passes, the material can be used to generate videos in the playground and via the API.
You can upload virtual material through the OpenAPI or in the playground.
<span id="65934594"></span>
### Read and agree to the agreement
Before the first ingestion, open the [console](https://console.volcengine.com/ark/region:ark+cn-beijing/overview?briefPage=0&briefType=introduce&type=new) > **Open Management** > **enable asset library access**, then read and agree to the relevant rules and agreements:
<div style="text-align: center"><img src="https://p9-arcosite.byteimg.com/tos-cn-i-goo7wpa0wc/4b083981c8ca48ddbcedd6e750061626~tplv-goo7wpa0wc-image.image" width="2938px" /></div>
Create an Asset Group first, then add virtual-figure assets to the group.
:::tip
For detailed material format requirements, see [Asset Library Structure](/docs/82379/2333565#2b7bf522).
:::
<span id="f9a31891"></span>
### Using the console
1. Open the [Ark console](https://console.volcengine.com/ark/region:ark+cn-beijing/experience/vision?modelId=doubao-seedance-2-0-260128&tab=GenVideo) > **My** > **Virtual Figures**.
![Image](https://p9-arcosite.byteimg.com/tos-cn-i-goo7wpa0wc/b1152a6834bc4a7e8f5137474bf34391~tplv-goo7wpa0wc-image.image =1541x)
2. Create an asset group.
3. Upload assets into the asset group.
<span id="f96dab35"></span>
### Using the API
Call the `CreateAssetGroup` API to create an asset group first, then call the `CreateAsset` API to upload assets into it. Request examples:
1. **Create an asset group**
:::tip
**Note**
* Calling the Assets APIs requires Access Key authentication; see [API access key management](https://www.volcengine.com/docs/6257/64983?lang=zh).
* For API parameter details, see the [Private virtual-figure library API reference](/docs/82379/2333601).
* **Asset library [project](https://www.volcengine.com/docs/82379/1359411?lang=zh#03ec4a65) (Project) isolation:**
* When creating or querying Assets within a given Asset Group, the **ProjectName** of the two must match.
* The **ProjectName** an Asset belongs to must match the **ProjectName** of the API key used to call the video generation API.
:::
Use **POST** `CreateAssetGroup` to create an asset group.
Pass in the request:
* **Name**: name of the asset group.
* **Description**: text description of the asset group.
* **GroupType**: optional; defaults to AIGC (virtual-figure material).
:::tip
Only the AIGC type is currently supported.
:::
* **ProjectName**: optional; the project to create the resource in, defaulting to default. Resources in a project can only be used by inference endpoints under that project. To look up project names, see the [documentation](https://www.volcengine.com/docs/82379/1359411?lang=zh#03ec4a65).
:::tip
**Note**
If **ProjectName** is not specified in the request, the asset group is created in the **default** project.
:::
Request example:
**Note**: AK/SK authentication is required; see [API access key management](https://www.volcengine.com/docs/6257/64983?lang=zh).
```Go
package main
import (
"fmt"
"github.com/bytedance/sonic"
"github.com/volcengine/volcengine-go-sdk/volcengine"
"github.com/volcengine/volcengine-go-sdk/volcengine/credentials"
"github.com/volcengine/volcengine-go-sdk/volcengine/session"
"github.com/volcengine/volcengine-go-sdk/volcengine/universal"
)
func main() {
config := volcengine.NewConfig().WithCredentials(credentials.NewStaticCredentials("<YOUR_AK>", "<YOUR_SK>", "")).WithRegion("cn-beijing")
sess, _ := session.NewSession(config)
resp, err := universal.New(sess).DoCall(
universal.RequestUniversal{
ServiceName: "ark",
Action: "CreateAssetGroup",
Version: "2024-01-01",
HttpMethod: universal.POST,
ContentType: universal.ApplicationJSON,
},
&map[string]any{
"Name": "figure_group_1",
"Description": "Figure group 1",
"ProjectName": "<PROJECT_NAME>",
},
)
if err != nil {
fmt.Printf("error: %v\n", err)
return
}
if resp == nil {
return
}
respData, err := sonic.Marshal(resp)
fmt.Println(string(respData))
}
```
Response example:
```JSON
{
"Id": "group-20260318033332-*****"
}
```
2. **Upload an asset**
:::danger
The CreateAsset upload API is asynchronous; system processing may queue, which lengthens ingestion time. No upload-time SLA is promised.
Video assets take longer to process.
:::
Use **POST** `CreateAsset` to upload an asset.
Provide in the request:
* **GroupId**: required; the asset group ID.
* **URL**: required; an accessible URL of the image/video/audio.
* **AssetType**: required; image, video, and audio assets are supported, specified as **Image/Video/Audio**. For detailed file limits, see the [Assets API reference](https://www.volcengine.com/docs/82379/2318271).
* **Name**: optional; an asset name that can be used for managing assets, e.g. the file name.
:::tip
This field is only used for fuzzy search with the ListAssets API and is not passed to model inference. For how to generate videos with assets, see [Using virtual figures](/docs/82379/2291680#2bf01416) and [3. How should the prompt (content.text) refer to reference material?](/docs/82379/2333565#15e21eb8).
:::
* **ProjectName**: optional; the project to create the resource in, defaulting to **default**. Resources in a project can only be used by inference endpoints under that project. To look up project names, see the [documentation](https://www.volcengine.com/docs/82379/1359411?lang=zh#03ec4a65).
:::tip
**Note**
If **ProjectName** is not specified in the request, the asset is uploaded to the **default** project. Use this field to make sure assets go into the intended project.
:::
**Note**
* Each request uploads one asset file.
* The request returns the asset ID; use the GetAsset API to check whether the upload succeeded.
```Go
package main
import (
"fmt"
"github.com/bytedance/sonic"
"github.com/volcengine/volcengine-go-sdk/volcengine"
"github.com/volcengine/volcengine-go-sdk/volcengine/credentials"
"github.com/volcengine/volcengine-go-sdk/volcengine/session"
"github.com/volcengine/volcengine-go-sdk/volcengine/universal"
)
func main() {
config := volcengine.NewConfig().WithCredentials(credentials.NewStaticCredentials("<YOUR_AK>", "<YOUR_SK>", "")).WithRegion("cn-beijing")
sess, _ := session.NewSession(config)
resp, err := universal.New(sess).DoCall(
universal.RequestUniversal{
ServiceName: "ark",
Action: "CreateAsset",
Version: "2024-01-01",
HttpMethod: universal.POST,
ContentType: universal.ApplicationJSON,
},
&map[string]any{
"GroupId": "group-20260318070359-*****",
"URL": "<IMAGE_URL>",
"AssetType": "Image",
"ProjectName": "<PROJECT_NAME>"
},
)
if err != nil {
fmt.Printf("error: %v\n", err)
return
}
if resp == nil {
return
}
respData, err := sonic.Marshal(resp)
fmt.Println(string(respData))
}
```
Response example:
```JSON
{
"Id": "asset-20260318071009-*****"
}
```
<span id="cd721316"></span>
## Retrieve virtual-figure assets (API & console)
You can retrieve virtual-figure assets in the following ways.
* **Console**: search and view uploaded virtual-figure assets in the [Ark console](https://console.volcengine.com/ark/region:ark+cn-beijing/experience/vision?modelId=doubao-seedance-2-0-260128&tab=GenVideo) > **My** > **My Virtual Figures**.
* **API**:
* **POST** `GetAsset`: get a single asset
* **POST** `ListAssets`: query assets
* **POST** `ListAssetGroups`: query asset group information
<span id="a32de856"></span>
### Get a single asset
Use **POST** `GetAsset`, specifying the asset Id, to fetch a single asset's information.
:::tip
For complete API parameters, rate limits, and more, see the [Private virtual-figure library API reference](/docs/82379/2333601).
:::
```Go
package main
import (
"fmt"
"github.com/bytedance/sonic"
"github.com/volcengine/volcengine-go-sdk/volcengine"
"github.com/volcengine/volcengine-go-sdk/volcengine/credentials"
"github.com/volcengine/volcengine-go-sdk/volcengine/session"
"github.com/volcengine/volcengine-go-sdk/volcengine/universal"
)
func main() {
config := volcengine.NewConfig().WithCredentials(credentials.NewStaticCredentials("your_ak", "your_sk", "")).WithRegion("cn-beijing")
sess, _ := session.NewSession(config)
resp, err := universal.New(sess).DoCall(
universal.RequestUniversal{
ServiceName: "ark",
Action: "GetAsset",
Version: "2024-01-01",
HttpMethod: universal.POST,
ContentType: universal.ApplicationJSON,
},
&map[string]any{
"Id": "asset-20260318070533-*****",
"ProjectName": "<PROJECT_NAME>", // 需确保填入素材所在项目的名称
},
)
if err != nil {
fmt.Printf("error: %v\n", err)
return
}
if resp == nil {
return
}
respData, err := sonic.Marshal(resp)
fmt.Println(string(respData))
}
```
Response example:
```JSON
{
"GroupId": "group-20260318033332-*****",
"Status": "Active",
"CreateTime": "2026-03-18T03:57:10Z",
"AssetType": "Image",
"UpdateTime": "2026-03-18T03:57:14Z",
"ProjectName": "default",
"Id": "asset-20260318035710-*****",
"Name": "",
"URL": "https://ark-media-asset-stg.tos-cn-beijing.volces.com/2100000825/031807095608757847.jpg?X-Tos-Algorithm=TOS4-HMAC-SHA256&X-Tos-Credential=****&X-Tos-Expires=43200&X-Tos-Security-Token=****&X-Tos-Signature=****&X-Tos-SignedHeaders=host" // 有效期为 12 小时
}
```
<span id="8910c01c"></span>
### Query assets
Use **POST** `ListAssets` to query Assets.
* Filter by group Id (GroupIds), asset status (Statuses), and asset name (Name); only assets matching all conditions are returned.
* Combine fuzzy search on Name with exact search on GroupIds to locate the assets you need.
* Sort the results with SortBy and SortOrder.
```Go
package main
import (
"fmt"
"github.com/bytedance/sonic"
"github.com/volcengine/volcengine-go-sdk/volcengine"
"github.com/volcengine/volcengine-go-sdk/volcengine/credentials"
"github.com/volcengine/volcengine-go-sdk/volcengine/session"
"github.com/volcengine/volcengine-go-sdk/volcengine/universal"
)
func main() {
config := volcengine.NewConfig().WithCredentials(credentials.NewStaticCredentials("<YOUR_AK>", "<YOUR_SK>", "")).WithRegion("cn-beijing")
sess, _ := session.NewSession(config)
resp, err := universal.New(sess).DoCall(
universal.RequestUniversal{
ServiceName: "ark",
Action: "ListAssets",
Version: "2024-01-01",
HttpMethod: universal.POST,
ContentType: universal.ApplicationJSON,
},
&map[string]any{
"Filter": map[string]any{
"GroupIds": []string{"group-20260318033332-*****"},
"GroupType": "AIGC",
"Statuses": []string{"Active", "Processing"}, // 支持 Active素材上传成功可使用Asset ID, Processing素材处理中, Failed素材上传失败
"Name": "figure", // 支持模糊搜索
},
"PageNumber": 1,
"PageSize": 10,
"SortBy": "GroupId",
"SortOrder": "Asc",
},
)
if err != nil {
fmt.Printf("list assets error: %v\n", err)
return
}
if resp == nil {
return
}
respData, err := sonic.Marshal(resp)
fmt.Println(string(respData))
}
```
Response example:
```JSON
{
"Items": [
{
"Id": "asset-20260318035710-kctzf",
"Name": "",
"AssetType": "Image",
"CreateTime": "2026-03-18T03:57:10Z",
"UpdateTime": "2026-03-18T03:57:14Z",
"ProjectName": "default",
"URL": "image_url", // valid for 12 hours
"GroupId": "group-20260318033332-*****",
"Status": "Active"
},
{
"GroupId": "group-20260318033332-*****",
"Status": "Active",
"Id": "asset-20260318034804-*****",
"Name": "",
"URL": "https://ark-media-asset-stg.tos-cn-beijing.volces.com/2100000825/031807095608757847.jpg?X-Tos-Algorithm=TOS4-HMAC-SHA256&X-Tos-Credential=****&X-Tos-Expires=43200&X-Tos-Security-Token=****&X-Tos-Signature=****&X-Tos-SignedHeaders=host",
"AssetType": "Image",
"CreateTime": "2026-03-18T03:48:04Z",
"UpdateTime": "2026-03-18T03:48:08Z",
"ProjectName": "default"
}
],
"TotalCount": 2,
"PageNumber": 1,
"PageSize": 10
}
```
<span id="f95b9753"></span>
### Query asset groups
Use **POST** `ListAssetGroups` to query asset group information.
It supports fuzzy search by group name (Name) and querying multiple groups by Id (GroupIds).
If you have many asset groups, use the Name field for fuzzy search.
:::tip
For the complete API reference, see the [Private virtual-figure library API reference](/docs/82379/2333601).
:::
```Go
package main
import (
"fmt"
"github.com/bytedance/sonic"
"github.com/volcengine/volcengine-go-sdk/volcengine"
"github.com/volcengine/volcengine-go-sdk/volcengine/credentials"
"github.com/volcengine/volcengine-go-sdk/volcengine/session"
"github.com/volcengine/volcengine-go-sdk/volcengine/universal"
)
func main() {
config := volcengine.NewConfig().WithCredentials(credentials.NewStaticCredentials("<YOUR_AK>", "<YOUR_SK>", "")).WithRegion("cn-beijing")
sess, _ := session.NewSession(config)
resp, err := universal.New(sess).DoCall(
universal.RequestUniversal{
ServiceName: "ark",
Action: "ListAssetGroups",
Version: "2024-01-01",
HttpMethod: universal.POST,
ContentType: universal.ApplicationJSON,
},
&map[string]any{
"Filter": map[string]any{
"Name": "figure_group", // Support fuzzy search
"GroupIds": []string{"group-20260318033332-*****"},
"GroupType": "AIGC",
},
"PageNumber": 1,
"PageSize": 10,
},
)
if err != nil {
fmt.Printf("error: %v\n", err)
return
}
if resp == nil {
return
}
respData, err := sonic.Marshal(resp)
fmt.Println(string(respData))
}
```
Response example:
```JSON
{
"TotalCount": 1,
"Items": [
{
"UpdateTime": "2026-03-18T03:33:32Z",
"Id": "group-20260318033332-*****",
"Name": "figure_group_1",
"Title": "figure_group_1",
"Description": "Figure group 1",
"GroupType": "AIGC",
"ProjectName": "default",
"CreateTime": "2026-03-18T03:33:32Z"
}
],
"PageNumber": 1,
"PageSize": 10
}
```
<span id="e545fe77"></span>
### Update/delete assets and asset groups
See the [Private virtual-figure library API reference](/docs/82379/2333601).
<span id="5c0ee427"></span>
## Example: upload an asset and fetch its info with GetAsset
The example below creates an asset, then queries its Status and, based on the status, decides whether to keep polling or return the corresponding result.
The code executes the following logic:
1. createAsset uploads the resource and obtains the AssetId.
2. waitForAssetActive starts the polling loop, repeatedly calling getAssetStatus to read the current asset status.
3. Branch on Status:
* Processing → keep polling
* Active → return the URL (done). Once the status is `Active`, the asset's Asset ID (in URI form) can be used for video generation; for how to generate videos with portrait assets, see [Using virtual figures](/docs/82379/2291680#2bf01416).
* Failed → return the error (done)
4. Return and print the result.
<Attachment link="https://p9-arcosite.byteimg.com/tos-cn-i-goo7wpa0wc/2782fff499e24d2cbb7836229b428ab4~tplv-goo7wpa0wc-image.image" name="Upload_Asset_Get_Info.go" ></Attachment>
Sample output:
```text
asset status: Active
asset is active, URL = https://ark-media-asset-stg.tos-cn-beijing.volces.com/2100000825/031807095608757847.jpg?X-Tos-Algorithm=TOS4-HMAC-SHA256&X-Tos-Credential=****&X-Tos-Expires=43200&X-Tos-Security-Token=****&X-Tos-Signature=****&X-Tos-SignedHeaders=host
```
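The attached Go program aside, the polling loop described above can be sketched in a few lines of Python (`get_asset_status` is a hypothetical wrapper around a GetAsset call, returning the Status, URL, and error):

```python
import time

# Minimal sketch of the polling logic above. `get_asset_status` is a
# hypothetical stand-in for a GetAsset call returning (Status, URL, Error).

def wait_for_asset_active(get_asset_status, asset_id, interval=5, timeout=600):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status, url, error = get_asset_status(asset_id)
        if status == "Active":
            return url                      # ready for video generation
        if status == "Failed":
            raise RuntimeError(f"asset processing failed: {error}")
        time.sleep(interval)                # Processing: keep polling
    raise TimeoutError(f"asset {asset_id} not Active within {timeout}s")

# Fake GetAsset that flips to Active on the third call.
states = iter(["Processing", "Processing", "Active"])
fake = lambda _id: (next(states), "https://example.com/asset.jpg", "")
print(wait_for_asset_active(fake, "asset-xxxx", interval=0))
```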
<span id="ca82b8d7"></span>
## Sample code in other languages
To get sample code in more languages, download:
<Attachment link="https://p9-arcosite.byteimg.com/tos-cn-i-goo7wpa0wc/bcc7d8a175744793a79b09027f0cf1ee~tplv-goo7wpa0wc-image.image" name="demo.zip" ></Attachment>
:::tip
Remember to replace the AK and SK in the demo; to call other APIs (such as ListAssets), also replace ACTION and the corresponding request parameters.
:::
<span id="c78f9931"></span>
## Generate videos with portrait assets
After obtaining an asset's Asset ID, you can generate videos with private portrait assets. See below for effect previews and usage.
<span id="225e69c7"></span>
### Video generation
Use the asset URI in the **content.<modality>_url.url** field of the Video Generation API to generate videos.
:::tip
Asset URI format: `asset://<asset_ID>`
:::
For details, see the [Seedance 2.0 tutorial](https://www.volcengine.com/docs/82379/2291680?lang=zh) and the [Seedance 2.0 API reference](https://www.volcengine.com/docs/82379/1520757?lang=zh).
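The URI join itself is trivial; a small Python helper (purely illustrative, with a hypothetical sanity check on the Id prefix) makes the format concrete:

```python
# Illustration of the asset URI format (not an SDK helper): prepend the
# fixed scheme "asset://" to an Asset ID returned by CreateAsset.

ASSET_SCHEME = "asset://"

def asset_uri(asset_id: str) -> str:
    # Hypothetical sanity check: Asset IDs in this document start with "asset-".
    if not asset_id.startswith("asset-"):
        raise ValueError(f"not an Asset ID: {asset_id!r}")
    return ASSET_SCHEME + asset_id

print(asset_uri("asset-20260224200602-qn7wr"))
# → asset://asset-20260224200602-qn7wr
```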
:::tip
In the prompt passed to the model, refer to reference material as **图片 1** (image 1), **视频 1** (video 1), and so on; the ordinal is the material's position in the request body. Do not use Asset IDs directly in the prompt.
Example: "**图片1** 里的女孩身着**图片2**中的服装,正在整理柜台上的物品。**图片3**中的男孩是一位顾客,他走上前,想要向女孩索要联系方式。" (The girl in **image 1**, wearing the outfit from **image 2**, is tidying items on the counter. The boy in **image 3** is a customer; he walks up to ask the girl for her contact information.)
For a call example, see [3. How should the prompt (content.text) refer to reference material?](/docs/82379/2333565#15e21eb8)
:::
Sample code:
```Python
import os
import time
# Install SDK: pip install 'volcengine-python-sdk[ark]'
from volcenginesdkarkruntime import Ark
client = Ark(
# The base URL for model invocation
base_url='https://ark.cn-beijing.volces.com/api/v3',
# Get API Keyhttps://console.volcengine.com/ark/region:ark+cn-beijing/apikey
api_key=os.environ.get("ARK_API_KEY"),
)
if __name__ == "__main__":
print("----- create request -----")
create_result = client.content_generation.tasks.create(
model="doubao-seedance-2-0-260128", # Replace with Model ID
content=[
{
"type": "text",
"text": "图片1中美妆博主用中文进行介绍妆容改为明艳大气去掉脸部反光笑容甜美近景镜头手持图片2的面霜面向镜头展示清新简约背景元气甜美风格。博主台词挖到本命面霜了质地像云朵一样软糯一抹就吸收熬夜急救、补水保湿全搞定素颜都自带柔光感。"
},
{
"type": "image_url",
"image_url": {
"url": "asset://asset-20260224200602-qn7wr" # Asset ID
},
"role": "reference_image"
},
{
"type": "image_url",
"image_url": {
"url": "https://ark-project.tos-cn-beijing.volces.com/doc_image/r2v_edit_pic1.jpg"
},
"role": "reference_image"
},
],
generate_audio=True,
ratio="16:9",
duration=11,
watermark=True,
)
print(create_result)
print("----- polling task status -----")
task_id = create_result.id
while True:
get_result = client.content_generation.tasks.get(task_id=task_id)
status = get_result.status
if status == "succeeded":
print("----- task succeeded -----")
print(get_result)
break
elif status == "failed":
print("----- task failed -----")
print(f"Error: {get_result.error}")
break
else:
print(f"Current status: {status}, Retrying after 30 seconds...")
time.sleep(30)
```
<span id="9f864be5"></span>
# FAQ
<span id="cbc4063e"></span>
#### 1. Why can't I generate videos or fetch asset info after an asset was uploaded successfully?
The asset library is isolated by [project](https://www.volcengine.com/docs/82379/1359411?lang=zh#03ec4a65) (**Project**).
* For video generation, you must use an inference endpoint in the **same project as the asset**.
* If the upload succeeded but the get-asset API fails to find the asset, you may have passed different **ProjectName** values when calling CreateAsset and the get-asset API.
* **ProjectName** defaults to `default`: if the field is not specified, the resource is created in the `default` project.
* We recommend managing assets within a single project.
<span id="617ff561"></span>
#### 2. How do I manage user permissions on the asset library?
You can use [IAM](https://console.volcengine.com/iam/identitymanage/user) to manage fine-grained permissions on the asset library. Set it up as follows:
1. **Create a custom policy**
1. Open [IAM](https://console.volcengine.com/iam/policymanage) > **Create Custom Policy**.
2. Enter a policy name.
3. Switch to the **JSON editor**, paste the custom policy below into the editor, and click **Submit** to save.
<div style="text-align: center"><img src="https://p9-arcosite.byteimg.com/tos-cn-i-goo7wpa0wc/937e2b58f8294223a06f3860fc461f15~tplv-goo7wpa0wc-image.image" width="1125px" /></div>
```JSON
{
"Statement": [
{
"Effect": "Allow",
"Action": [
"ark:*Asset*"
],
"Resource": [
"*"
]
}
]
}
```
2. **Grant the policy to users/user groups**
1. Click **User Management** > **Users**/**User Groups**, select the user or group to authorize, and click **Add Permission** on the right.
2. Under **Authorization Policy**, select the policy created in **Step 1**.
3. (Optional) Under **Limit to Project Resources**, choose the projects the policy applies to.
4. Click **Submit**.
After the steps above, the user/user group can manage assets in the corresponding projects.
For more about IAM, see [Identity and Access Management](http://volcengine.com/docs/6257?lang=zh).
<span id="15e21eb8"></span>
#### 3. How should the prompt (content.**text**) refer to reference material?
Refer to material in the prompt using the "**material type + ordinal**" format, e.g. **图片 1** (image 1), **视频 1** (video 1), **音频 1** (audio 1). The ordinal is the material's position among materials of the same type in the request body.
**Note**: do not refer to material by Asset ID in the prompt.
For example, the request below includes 5 reference images and 1 reference audio; refer to them as shown in the sample prompt.
* **Reference images:**
<div style="display: flex;">
<div style="flex-shrink: 0;width: calc((100% - 64px) * 0.2000);">
![Image](https://p9-arcosite.byteimg.com/tos-cn-i-goo7wpa0wc/bc3f0a1951c94cd282c690d2f8a938e0~tplv-goo7wpa0wc-image.image =426x)
图片 1
</div>
<div style="flex-shrink: 0;width: calc((100% - 64px) * 0.2000);margin-left: 16px;">
![Image](https://p9-arcosite.byteimg.com/tos-cn-i-goo7wpa0wc/c9b934d1e50246cdb840318f59e4f00a~tplv-goo7wpa0wc-image.image =157x)
图片 2
</div>
<div style="flex-shrink: 0;width: calc((100% - 64px) * 0.2000);margin-left: 16px;">
![Image](https://p9-arcosite.byteimg.com/tos-cn-i-goo7wpa0wc/e987f41012a24a6fa8e746126916a933~tplv-goo7wpa0wc-image.image =534x)
图片 3
</div>
<div style="flex-shrink: 0;width: calc((100% - 64px) * 0.2000);margin-left: 16px;">
![Image](https://p9-arcosite.byteimg.com/tos-cn-i-goo7wpa0wc/f74e55364f664c67885761a1a02648ae~tplv-goo7wpa0wc-image.image =674x)
图片 4
</div>
<div style="flex-shrink: 0;width: calc((100% - 64px) * 0.2000);margin-left: 16px;">
![Image](https://p9-arcosite.byteimg.com/tos-cn-i-goo7wpa0wc/1ed8333cf28649e9a6efdef54529e436~tplv-goo7wpa0wc-image.image =574x)
图片 5
</div>
</div>
* **Prompt:**
```Plain Text
清新奶油画风短剧,轻快吉他卡点快切,奶油白主色 + 蜜桃粉高光画面柔和无特效靠表情传情。0-2 秒:快切 2 镜图片 1中的霸总不小心撞到穿着图片 2的衣服的图片 3中的女主两人错愕对视+ 霸总扯下自己的西装外套披在女主身上(手部特写)」,背景吉他声起,咖啡杯掉落 / 衣服摩擦的轻柔音效2-6 秒:快切 3 镜「女主穿霸总外套低头偷笑(脸颊泛红特写)+ 霸总看着女主背影嘴角微扬,说“我们一起走吧”参考音频 1侧颜 + 两人在雨夜共撑一把黑伞,指尖相触快速收回(近景)」,雨天背景为图片 4每镜卡点轻鼓重拍配雨滴落地 / 伞骨撑开的音效画面带轻微柔雾质感6-8 秒:慢放两人对视笑眼,画面右下角出现图片 5的文字部分左下角小字「NEW EP DAILY」背景飘淡粉色花瓣极简BGM 落温柔尾音,画面定格两人同框侧脸。
```
* **Sample code:**
```Bash
curl --location 'https://ark.cn-beijing.volces.com/api/v3/contents/generations/tasks' \
-X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $ARK_API_KEY"\
-d '{
"model": "doubao-seedance-2-0-260128",
"content": [
{
"type": "text",
"text": "清新奶油画风短剧,轻快吉他卡点快切,奶油白主色 + 蜜桃粉高光画面柔和无特效靠表情传情。0-2 秒:快切 2 镜图片 1中的霸总不小心撞到穿着图片 2的衣服的图片 3中的女主两人错愕对视+ 霸总扯下自己的西装外套披在女主身上(手部特写)」,背景吉他声起,咖啡杯掉落 / 衣服摩擦的轻柔音效2-6 秒:快切 3 镜「女主穿霸总外套低头偷笑(脸颊泛红特写)+ 霸总看着女主背影嘴角微扬,说“我们一起走吧”参考音频 1侧颜 + 两人在雨夜共撑一把黑伞,指尖相触快速收回(近景)」,雨天背景为图片 4每镜卡点轻鼓重拍配雨滴落地 / 伞骨撑开的音效画面带轻微柔雾质感6-8 秒:慢放两人对视笑眼,画面右下角出现图片 5的文字部分左下角小字「NEW EP DAILY」背景飘淡粉色花瓣极简BGM 落温柔尾音,画面定格两人同框侧脸。"
},
{
"type": "image_url",
"role": "reference_image",
"image_url": {
"url": "asset://asset-20260224185115-hnjhb"
}
},
{
"type": "image_url",
"role": "reference_image",
"image_url": {
"url": "asset://asset-20260224185115-8gghm"
}
},
{
"type": "image_url",
"role": "reference_image",
"image_url": {
"url": "asset://asset-20260224185115-cjkwr"
}
},
{
"type": "image_url",
"role": "reference_image",
"image_url": {
"url": "asset://asset-20260224185115-pxbk9"
}
},
{
"type": "image_url",
"role": "reference_image",
"image_url": {
"url": "asset://asset-20260224185115-2c698"
}
},
{
"type": "audio_url",
"role": "reference_audio",
"audio_url": {
"url": "asset://asset-20260224185115-dp9qm"
}
}
],
"generate_audio": true,
"ratio": "16:9",
"duration": 11,
"watermark": false
}'
```

Different model services support different capabilities and unit prices. This page describes each model's billing formula and unit prices so that you can look up and compare model pricing.
:::tip
* For billing methods and detailed billing logic, see [Model service billing](/docs/82379/1544681).
* You can use the [price calculator](https://www.volcengine.com/pricing?product=ark_bd&tab=2) to **estimate** the cost of meeting your business needs.
* The prices in this document and on the [pricing page](https://www.volcengine.com/pricing?product=ark_bd&tab=1) are for reference only; the actual purchasable specifications and fees are subject to your order.
:::
<span id="76de5911"></span>
# Large Language Models
<span id="aa1874cf"></span>
## Online Inference (Standard)
<span aceTableMode="list" aceTableWidth="3,2,1,1,1,1"></span>
|Model |Condition |Input |Cache storage |Cached input |Output |\
| |(K tokens) |(CNY/M tokens) |(CNY/M tokens/hour) |(CNY/M tokens) |(CNY/M tokens) |
|---|---|---|---|---|---|
|doubao\-seed\-2.0\-pro |Input length [0, 32] |3.2 |0.017 |0.64 |16.0 |
|^^|Input length (32, 128] |4.8 |0.017 |0.96 |24.0 |
|^^|Input length (128, 256] |9.6 |0.017 |1.92 |48.0 |
|doubao\-seed\-2.0\-lite |Input length [0, 32] |0.6 |0.017 |0.12 |3.6 |
|^^|Input length (32, 128] |0.9 |0.017 |0.18 |5.4 |
|^^|Input length (128, 256] |1.8 |0.017 |0.36 |10.8 |
|doubao\-seed\-2.0\-mini |Input length [0, 32] |0.2 |0.017 |0.04 |2.0 |
|^^|Input length (32, 128] |0.4 |0.017 |0.08 |4.0 |
|^^|Input length (128, 256] |0.8 |0.017 |0.16 |8.0 |
|doubao\-seed\-2.0\-code |Input length [0, 32] |3.2 |0.017 |0.64 |16.0 |
|^^|Input length (32, 128] |4.8 |0.017 |0.96 |24.0 |
|^^|Input length (128, 256] |9.6 |0.017 |1.92 |48.0 |
|doubao\-seed\-1.8 |Input length [0, 32]|0.80 |0.017 |0.16 |2.00 |\
| |and output length [0, 0.2] | | | | |
|^^|Input length [0, 32]|0.80 |0.017 |0.16 |8.00 |\
| |and output length (0.2,+∞) | | | | |
|^^|Input length (32, 128] |1.20 |0.017 |0.16 |16.00 |
|^^|Input length (128, 256] |2.40 |0.017 |0.16 |24.00 |
|doubao\-seed\-character |Input length [0, 32] |0.80 |0.017 |0.16 |2.00 |
|^^|Input length (32, 128] |1.20 |0.017 |0.16 |6.00 |
|doubao\-seed\-code |Input length [0, 32] |1.20 |0.017 |0.24 |8.00 |
|^^|Input length (32, 128] |1.40 |0.017 |0.24 |12.00 |
|^^|Input length (128, 256] |2.80 |0.017 |0.24 |16.00 |
|doubao\-seed\-1.6 |Input length [0, 32]|0.80 |0.017 |0.16 |2.00 |\
| |and output length [0, 0.2] | | | | |
|^^|Input length [0, 32]|0.80 |0.017 |0.16 |8.00 |\
| |and output length (0.2,+∞) | | | | |
|^^|Input length (32, 128] |1.20 |0.017 |0.16 |16.00 |
|^^|Input length (128, 256] |2.40 |0.017 |0.16 |24.00 |
|doubao\-seed\-1.6\-lite |Input length [0, 32]|0.30 |0.017 |0.06 |0.60 |\
| |and output length [0, 0.2] | | | | |
|^^|Input length [0, 32]|0.30 |0.017 |0.06 |2.40 |\
| |and output length (0.2,+∞) | | | | |
|^^|Input length (32, 128] |0.60 |0.017 |0.06 |4.00 |
|^^|Input length (128, 256] |1.20 |0.017 |0.06 |12.00 |
|doubao\-seed\-1.6\-flash |Input length [0, 32] |0.15 |0.017 |0.03 |1.50 |
|^^|Input length (32, 128] |0.30 |0.017 |0.03 |3.00 |
|^^|Input length (128, 256] |0.60 |0.017 |0.03 |6.00 |
|doubao\-seed\-1.6\-vision |Input length [0, 32] |0.80 |0.017 |0.16 |8.00 |
|^^|Input length (32, 128] |1.20 |0.017 |0.16 |16.00 |
|^^|Input length (128, 256] |2.40 |0.017 |0.16 |24.00 |
|doubao\-seed\-translation |\- |1.20 |Not supported |Not supported |3.60 |
|doubao\-1.5\-pro\-32k |\- |0.80 |0.017 |0.16 |2.00 |
|doubao\-1.5\-lite\-32k |\- |0.30 |0.017 |0.06 |0.60 |
|doubao\-1.5\-vision\-pro |\- |3.00 |Not supported |Not supported |9.00 |
|glm\-4.7 |Input length [0, 32]|2.0 |0.017 |0.4 |8.0 |\
| |and output length [0, 0.2] | | | | |
|^^|Input length [0, 32]|3.0 |0.017 |0.6 |14.0 |\
| |and output length (0.2,+∞) | | | | |
|^^|Input length (32, 200] |4.0 |0.017 |0.8 |16.0 |
|deepseek\-v3.2 |Input length [0, 32] |2.00 |0.017 |0.4 |3.00 |
|^^|Input length (32, 128] |4.00 |0.017 |0.4 |6.00 |
|deepseek\-v3.1 |\- |4.00 |0.017 |0.80 |12.00 |
|deepseek\-v3 |\- |2.00 |0.017 |0.40 |8.00 |
|deepseek\-r1 |\- |4.00 |0.017 |0.80 |16.00 |
> * 按 token 后付费,计算公式:
> * `在线推理费用 = 输入单价 × 输入token + 缓存输入单价 × 缓存命中token + 缓存存储单价 × 缓存存储token × 时长 + 输出单价 × 输出token`
> * 分段计费:部分模型根据每次请求的输入长度和输出长度区间,适用不同的 token 单价。
> * 举例:请求输入 200k tokens,输出 14k tokens,满足 **输入长度 (128, 256]** 条件,模型输入输出 token 按照输入 2.4 元/百万 token、输出 24 元/百万 token 的单价计费。
> * 常见问题: [如何查看历史调用的输入输出长度的区间分布?](/docs/82379/1359411#fba666f2)
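上面的计费公式可以用一段 Python 示意代码验证(仅为按公式的估算草稿,函数名为示例自拟;实际费用以账单为准):

```python
def online_inference_cost(input_tokens, output_tokens,
                          input_price, output_price,
                          cached_tokens=0, cached_price=0.0,
                          cache_storage_tokens=0, cache_storage_price=0.0,
                          hours=0.0):
    """在线推理费用估算。单价均为 元/百万 token,
    缓存存储单价为 元/百万 token/小时。"""
    m = 1_000_000
    return (input_price * input_tokens / m
            + cached_price * cached_tokens / m
            + cache_storage_price * cache_storage_tokens / m * hours
            + output_price * output_tokens / m)

# 文中举例:输入 200k、输出 14k,命中 (128, 256] 档,
# doubao-seed-1.6 单价为输入 2.40、输出 24.00 元/百万 token
cost = online_inference_cost(200_000, 14_000, 2.40, 24.00)
print(round(cost, 3))  # 0.816
```

即该次请求约 0.816 元(未命中缓存时,缓存相关各项为 0)。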
<span id="d3774bbd"></span>
## 在线推理(低延迟)
<span aceTableMode="list" aceTableWidth="3,2,1,1,1"></span>
|模型名称 |条件|输入|缓存输入|输出|\
| |千 token |元/百万token |元/百万token |元/百万token |
|---|---|---|---|---|
|doubao\-seed\-2.0\-pro |输入长度 [0, 32] |9.6 |1.92 |48.0 |
|^^|输入长度 (32, 128] |14.4 |2.88 |72.0 |
|^^|输入长度 (128, 256] |28.8 |5.76 |144.0 |
|doubao\-seed\-2.0\-lite |输入长度 [0, 32] |1.2 |0.24 |7.2 |
|^^|输入长度 (32, 128] |1.8 |0.36 |10.8 |
|^^|输入长度 (128, 256] |3.6 |0.72 |21.6 |
|doubao\-seed\-2.0\-mini |输入长度 [0, 32] |0.4 |0.08 |4.0 |
|^^|输入长度 (32, 128] |0.8 |0.16 |8.0 |
|^^|输入长度 (128, 256] |1.6 |0.32 |16.0 |
<span id="952683a2"></span>
## 在线推理(TPM 保障包)
<span aceTableMode="list" aceTableWidth="3,2,2,2"></span>
|模型 |计费方式 |输入|输出|\
| | |元/每10K TPM |元/每1K TPM |
|---|---|---|---|
|doubao\-seed\-1.8 |按购买时长后付费 |1.920 |0.480 |
|^^|包天预付费 |23.040 |5.760 |
|doubao\-seed\-1.6 |按购买时长后付费 |1.920 |0.480 |
|^^|包天预付费 |23.040 |5.760 |
|doubao\-seed\-1.6\-vision |按购买时长后付费 |1.920 |0.480 |
|^^|包天预付费 |23.040 |5.760 |
|doubao\-seed\-1.6\-flash|按购买时长后付费 |0.360 |0.360 |\
|> 0615版本不支持 | | | |
|^^|包天预付费 |4.320 |4.320 |
|doubao\-1.5\-vision\-pro |按购买时长后付费 |7.200 |2.160 |
|^^|包天预付费 |86.400 |25.920 |
|doubao\-1.5\-pro\-32k|按购买时长后付费 |1.920 |0.480 |\
|> 包含 character\-250715 版本 | | | |
|^^|包天预付费 |23.040 |5.760 |
|doubao\-1.5\-lite\-32k |按购买时长后付费 |0.72 |0.144 |
|^^|包天预付费 |8.64 |1.728 |
|doubao\-pro\-32k |按购买时长后付费 |1.920 |0.480 |
|^^|包天预付费 |23.040 |5.760 |
|deepseek\-v3.2 |按购买时长后付费 |7.2 |1.08 |
|^^|包天预付费 |86.4 |12.96 |
|deepseek\-v3.1 |按购买时长后付费 |9.60 |2.88 |
|^^|包天预付费 |115.20 |34.56 |
|deepseek\-v3 |按购买时长后付费 |4.80 |1.92 |
|^^|包天预付费 |57.60 |23.04 |
|deepseek\-r1 |按购买时长后付费 |9.60 |3.84 |
|^^|包天预付费 |115.20 |46.08 |
> * 相比普通的按 token 计费模式TPM 保障包具备更高并发、更低延迟、更强稳定性。支持的模型以[接入点创建页](https://console.volcengine.com/ark/region:ark+cn-beijing/endpoint/create)可选的付费方式为准。
> * 支持「按购买时长后付费」和「包天预付费」两种方式叠加购买,可灵活组合。
> * **doubao\-seed\-1.6 系列及之后模型deepseek\-v3.2 模型,不同长度请求抵扣 TPM 速度不同**,可通过 TPM 计算器查看相应的抵扣系数,估算实际需购买的**可抵扣TPM**。
<span id="a6471f38"></span>
## 批量推理
<span aceTableMode="list" aceTableWidth="3,2,1,1,2"></span>
|模型名称 |条件|输入|缓存命中|输出|\
| |千 token |元/百万token |元/百万token |元/百万token |
|---|---|---|---|---|
|doubao\-seed\-2.0\-pro |输入长度 [0, 32] |1.6 |0.64 |8.0 |
|^^|输入长度 (32, 128] |2.4 |0.96 |12.0 |
|^^|输入长度 (128, 256] |4.8 |1.92 |24.0 |
|doubao\-seed\-2.0\-lite |输入长度 [0, 32] |0.3 |0.12 |1.8 |
|^^|输入长度 (32, 128] |0.45 |0.18 |2.7 |
|^^|输入长度 (128, 256] |0.9 |0.36 |5.4 |
|doubao\-seed\-2.0\-mini |输入长度 [0, 32] |0.1 |0.04 |1.0 |
|^^|输入长度 (32, 128] |0.2 |0.08 |2.0 |
|^^|输入长度 (128, 256] |0.4 |0.16 |4.0 |
|doubao\-seed\-2.0\-code |输入长度 [0, 32] |1.6 |0.64 |8.0 |
|^^|输入长度 (32, 128] |2.4 |0.96 |12.0 |
|^^|输入长度 (128, 256] |4.8 |1.92 |24.0 |
|doubao\-seed\-1.8 |输入长度 [0, 32]|0.40 |0.16 |1.00 |\
| |且输出长度 [0, 0.2] | | | |
|^^|输入长度 [0, 32]|0.40 |0.16 |4.00 |\
| |且输出长度 (0.2,+∞) | | | |
|^^|输入长度 (32, 128] |0.60 |0.16 |8.00 |
|^^|输入长度 (128, 256] |1.20 |0.16 |12.00 |
|doubao\-seed\-1.6\-vision |输入长度 [0, 32] |0.40 |0.16 |4.00 |
|^^|输入长度 (32, 128] |0.60 |0.16 |8.00 |
|^^|输入长度 (128, 256] |1.20 |0.16 |12.00 |
|doubao\-seed\-1.6\-lite |输入长度 [0, 32]|0.15 |0.06 |0.30 |\
| |且输出长度 [0, 0.2] | | | |
|^^|输入长度 [0, 32]|0.15 |0.06 |1.20 |\
| |且输出长度 (0.2,+∞) | | | |
|^^|输入长度 (32, 128] |0.30 |0.06 |2.00 |
|^^|输入长度 (128, 256] |0.60 |0.06 |6.00 |
|doubao\-seed\-1.6 |输入长度 [0, 32]|0.40 |0.16 |1.00 |\
| |且输出长度 [0, 0.2] | | | |
|^^|输入长度 [0, 32]|0.40 |0.16 |4.00 |\
| |且输出长度 (0.2,+∞) | | | |
|^^|输入长度 (32, 128] |0.60 |0.16 |8.00 |
|^^|输入长度 (128, 256] |1.20 |0.16 |12.00 |
|doubao\-seed\-1.6\-flash |输入长度 [0, 32] |0.075 |0.03 |0.75 |
|^^|输入长度 (32, 128] |0.150 |0.03 |1.50 |
|^^|输入长度 (128, 256] |0.300 |0.03 |3.00 |
|doubao\-seed\-translation |\- |0.60 |0.24 |1.80 |
|doubao\-1.5\-pro\-32k |\- |0.40 |0.16 |1.00 |
|doubao\-1.5\-lite\-32k |\- |0.15 |0.06 |0.30 |
|doubao\-pro\-32k |\- |0.80 |0.16 |2.00 |
|deepseek\-v3.2 |输入长度 [0, 32] |1.00 |0.40 |1.50 |
|^^|输入长度 (32, 128] |2.00 |0.40 |3.00 |
|deepseek\-v3.1 |\- |2.00 |0.80 |6.00 |
|deepseek\-v3 |\- |1.00 |0.40 |4.00 |
|deepseek\-r1 |\- |2.00 |0.80 |8.00 |
> * 按 token 后付费,计算公式:`批量推理费用 = 输入单价 × 输入token + 缓存命中单价 × 缓存命中token + 输出单价 × 输出token`
> * 部分模型已支持透明前缀缓存能力,无需任何配置,享受命中缓存后的更低单价。
> * doubao\-seed\-1.6 系列支持分段计费,即根据每次请求的输入及输出长度,采用不同 token 单价。
> * 举例:当某次请求的输入长度为 200k,输出长度为 14k 时,满足 **输入长度 (128, 256]** 条件,模型产生的所有 token 按照本表批量推理单价(输入 1.20 元/百万 token、输出 12.00 元/百万 token)计费。
> * 查看往期调用的输入输出长度分布,请查看常见问题 [如何查看历史调用的输入输出长度的区间分布?](/docs/82379/1359411#fba666f2)
<span id="02affcb8"></span>
# 视频生成模型
<span id="2864f00a"></span>
## 按token单价
<span aceTableMode="list" aceTableWidth="3,3,3"></span>
|模型 |在线推理|离线推理|\
| |元/百万token |元/百万token |
|---|---|---|
|doubao\-seedance\-2.0|* 输出视频分辨率为 480p720p|暂不支持 |\
|> 按输出视频分辨率和输入是否包含视频区分定价 | * 输入不含视频46.00| |\
| | * 输入包含视频28.00| |\
| |* 输出视频分辨率为 1080p| |\
| | * 输入不含视频51.00| |\
| | * 输入包含视频31.00 | |
|doubao\-seedance\-2.0\-fast|* 输入不含视频37.00|暂不支持 |\
|> 按输入是否包含视频区分定价|* 输入包含视频22.00 | |\
|> 不支持输出 1080p 视频 | | |
|doubao\-seedance\-1.5\-pro|* 有声视频16.00|* 有声视频8.00|\
|> 按输出视频是否包含声音区分定价 |* 无声视频8.00 |* 无声视频4.00 |
|doubao\-seedance\-1.0\-pro |15.00 |7.50 |
|doubao\-seedance\-1.0\-pro\-fast |4.20 |2.10 |
|doubao\-seedance\-1.0\-lite |10.00 |5.00 |
> * 仅对成功生成的视频计费。因审核等原因导致生成失败的,不收取费用。
> * 视频价格估算公式:`按 token 单价 × token 用量`
> * 正常视频 token 用量估算:`(输入视频时长+输出视频时长) × 输出视频的宽 × 输出视频的高 × 输出视频的帧率/1024`。注意存在输入视频时Seedance 2.0 和 Seedance 2.0 fast 模型针对不同的视频输出时长存在最低 token 用量限制,详见下文表格。
> * Draft 视频(仅 480ptoken 用量估算:`正常视频 token 用量公式 × 折算系数`。折算系数与模型相关Seedance 1.5 pro 的 token 折算系数:无声 0.7有声 0.6,其他模型暂不支持。
> * 准确 token 用量:以调用 API 后返回信息中的 usage 字段为准。
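按上述估算公式,可以用一段 Python 草稿核对视频 token 用量与价格(示意代码,准确用量仍以 API 返回的 usage 字段为准;宽高像素值取自下文宽高比表格):

```python
def video_tokens(duration_s, width, height, fps, input_duration_s=0):
    # token 用量 = (输入时长 + 输出时长) × 宽 × 高 × 帧率 / 1024
    return (input_duration_s + duration_s) * width * height * fps / 1024

# 480p 16:9seedance 1.5 pro 对应 864×496、输出 5 秒、24 fps
tokens = video_tokens(5, 864, 496, 24)
print(int(tokens))                 # 50220
# 按无声视频单价 8 元/百万 token 估算
print(round(tokens / 1e6 * 8, 2))  # 0.4
```

约 0.40 元,与下文 seedance 1.5 pro 的 480p 无声价格示例一致。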
<span id="2653dbb3"></span>
## 价格示例
基于 token 用量公式估算的视频单价,方便您直观了解不同规格的视频成本。更多价格示例请参见[火山方舟视频生成模型价格快查表](https://bytedance.larkoffice.com/wiki/FXaYwxzJ5i5Zdik32ipcWzt7nxd?table=tblns3WjGMNbR8sL&view=vewPa39Do4#CategoryScheduledTask)。
<span id="83af2aad"></span>
### doubao\-seedance\-2.0 & 2.0 fast
> * 视频价格估算公式:`按 token 单价 × token 用量`=`按 token 单价 × (输入视频时长+输出视频时长) × 输出视频的宽 × 输出视频的高 × 输出视频的帧率/1024`
> * 注意:输入包含视频时Seedance 2.0 和 Seedance 2.0 fast 模型针对不同的视频输出时长存在最低 token 用量限制,如果 token 估算用量低于最低 token 用量限制,则按最低 token 用量计算视频价格。
* **输入不含视频**
<span aceTableMode="list" aceTableWidth="2,2,3,4,4"></span>
|分辨率 |宽高比 |输出视频时长(秒) |doubao\-seedance\-2.0|doubao\-seedance\-2.0\-fast|\
| | | |视频价格(元/个) |视频价格(元/个) |
|---|---|---|---|---|
|480p |16:9 |5 |2.31 |1.86 |
|720p |16:9 |5 |4.97 |4.00 |
|1080p |16:9 |5 |12.39 |不支持 |
* **输入包含视频**
<span aceTableMode="list" aceTableWidth="2,2,3,3,4,4"></span>
|分辨率 |宽高比 |输入视频时长(秒) |输出视频时长(秒) |doubao\-seedance\-2.0|doubao\-seedance\-2.0\-fast|\
| | | | |视频价格(元/个) |视频价格(元/个) |
|---|---|---|---|---|---|
|480p |16:9 |2~15 |5 |2.53~5.62|1.99~4.42|\
| | | | |> 最低价对应输入2~4秒|> 最低价对应输入2~4秒|\
| | | | |> 最高价对应输入15秒 |> 最高价对应输入15秒 |
|720p |16:9 |2~15 |5 |5.44~12.10|4.28~9.50|\
| | | | |> 最低价对应输入2~4秒|> 最低价对应输入2~4秒|\
| | | | |> 最高价对应输入15秒 |> 最高价对应输入15秒 |
|1080p |16:9 |2~15 |5 |13.56~30.13|不支持 |\
| | | | |> 最低价对应输入2~4秒| |\
| | | | |> 最高价对应输入15秒 | |
输入包含视频时Seedance 2.0 & 2.0 fast 存在最低 token 用量限制。本表以 16:9 宽高比为例展示各分辨率下的最低 token 用量。不同宽高比的最低 token 用量存在少许差异,详情参见 [火山方舟视频生成模型价格快查表](https://bytedance.larkoffice.com/wiki/FXaYwxzJ5i5Zdik32ipcWzt7nxd?table=tblmNCuMjADrXtDf&view=vewPa39Do4#CategoryScheduledTask)。
<span aceTableMode="list" aceTableWidth="3,3,3,3"></span>
|输出视频秒数 |最低tokens\-480P |最低tokens\-720P |最低tokens\-1080P |
|---|---|---|---|
|4 |70308 |151200 |340200 |
|5 |90396 |194400 |437400 |
|6 |100440 |216000 |486000 |
|7 |120528 |259200 |583200 |
|8 |140616 |302400 |680400 |
|9 |150660 |324000 |729000 |
|10 |170748 |367200 |826200 |
|11 |190836 |410400 |923400 |
|12 |200880 |432000 |972000 |
|13 |220968 |475200 |1069200 |
|14 |241056 |518400 |1166400 |
|15 |251100 |540000 |1215000 |
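最低 token 用量的抵扣逻辑可以用一段 Python 草稿演示(示意代码480p 最低用量取自上表,宽高像素值 864×496 取自下文宽高比表格):

```python
# 最低 token 用量480p16:9,摘自上表
MIN_TOKENS_480P = {
    4: 70308, 5: 90396, 6: 100440, 7: 120528,
    8: 140616, 9: 150660, 10: 170748, 11: 190836,
    12: 200880, 13: 220968, 14: 241056, 15: 251100,
}

def tokens_with_floor(input_s, output_s, width, height, fps, floor_table):
    # 估算用量低于对应输出时长的最低限制时,按最低用量计
    est = (input_s + output_s) * width * height * fps / 1024
    return max(est, floor_table[output_s])

# 输入 2 秒视频 + 输出 5 秒480p864×496、24 fps
# 估算 70308 tokens低于 5 秒档最低限制 90396,按 90396 计
t = tokens_with_floor(2, 5, 864, 496, 24, MIN_TOKENS_480P)
print(int(t))                  # 90396
# seedance 2.0 输入含视频单价 28 元/百万 token
print(round(t / 1e6 * 28, 2))  # 2.53
```

约 2.53 元,对应上文"输入包含视频"价格示例区间的最低价(输入 2~4 秒)。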
<span id="dd571290"></span>
### doubao\-seedance\-1.5\-pro
<span aceTableMode="list" aceTableWidth="2,2,2,3,3,3,3"></span>
|分辨率 |宽高比 |时长(秒) |有声视频|Draft 有声|无声视频|Draft无声|\
| | | |价格|视频价格|价格|视频价格|\
| | | |(元/个) |(元/个) |(元/个) |(元/个) |
|---|---|---|---|---|---|---|
|480p |16:9 |5 |0.80 |0.48 |0.40 |0.28 |
|720p |16:9 |5 |1.73 |不支持 |0.86 |不支持 |
|1080p |16:9 |5 |3.89 |不支持 |1.94 |不支持 |
<span id="457edfd0"></span>
# 图片生成模型
<span aceTableMode="list" aceTableWidth="3,6"></span>
|模型名称 |单价|\
| |元/张 |
|---|---|
|doubao\-seedream\-5.0\-lite |0.22 |
|doubao\-seedream\-4.5 |0.25 |
|doubao\-seedream\-4.0 |0.2 |
|doubao\-seedream\-3.0\-t2i |0.259 |
> * 按成功输出图片数量计费:
> * 组图场景按实际生成的图片数量计费。
> * 因审核等原因未成功输出的图片不计费。
&nbsp;
<span id="e68ea83c"></span>
# 向量模型
<span aceTableMode="list" aceTableWidth="3,3,3"></span>
|模型 |文本输入|图片输入|\
| |元/百万 token |元/百万 token |
|---|---|---|
|doubao\-embedding\-vision |0.70 |1.80 |
> 按输入的 tokens 计费:
> 费用 = `文本输入 tokens × 文本输入单价 + 图片输入 tokens × 图片输入单价`
> = `文本输入 tokens × 文本输入单价 + min((width × height)/784, 1312) × 图片输入单价`
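向量模型费用可以按一段 Python 草稿估算(示意代码;这里假设图片 tokens = min(宽×高/784, 1312),即单图上限 1312 tokens准确用量以 API 返回为准):

```python
def embedding_cost(text_tokens, image_width=0, image_height=0,
                   text_price=0.70, image_price=1.80):
    # 假设图片 tokens = min(宽 × 高 / 784, 1312),单图上限 1312 tokens
    image_tokens = min(image_width * image_height / 784, 1312) if image_width else 0
    # 单价均为 元/百万 token
    return (text_tokens * text_price + image_tokens * image_price) / 1e6

# 1000 文本 tokens + 一张 1024×1024 图片(图片 tokens 触达上限 1312
print(round(embedding_cost(1000, 1024, 1024), 6))  # 0.003062
```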
<span id="b3a42676"></span>
# 模型精调
<span id="7e451788"></span>
## 精调\-按 token 后付费
<span aceTableMode="list" aceTableWidth="3,3,3"></span>
|基础模型 ID |LoRA精调|全量精调|\
| |元/百万token |元/百万token |
|---|---|---|
|doubao\-seed\-1.6 |40 |80 |
|doubao\-seed\-1.6\-flash |7 |14 |
|doubao\-1\-5\-pro\-32k\-250115 |50 |100 |
|doubao\-1\-5\-lite\-32k\-250115 |30 |60 |
> 训练费用 = 总 token 数 × 精调单价 = (用户训练集 token 数 + 混入 token 数 + 验证集 token 数) × 迭代轮次 × 精调 token 单价
> * 若 token 数小于 1000,将会上取整为 1000 tokens 计算。
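按 token 的精调训练费用可以用一段 Python 草稿估算(示意代码,示例数字为自拟的假设用量):

```python
def finetune_cost(train_tokens, mixin_tokens, valid_tokens, epochs, price_per_m):
    # 总 token 数 = (训练集 + 混入 + 验证集) × 迭代轮次
    total = (train_tokens + mixin_tokens + valid_tokens) * epochs
    total = max(total, 1000)  # 不足 1000 tokens 按 1000 计
    return total * price_per_m / 1e6  # 单价为 元/百万 token

# 假设doubao-seed-1.6 LoRA 精调40 元/百万 token
# 训练集 2M + 混入 0 + 验证集 0.1M迭代 3 轮
print(finetune_cost(2_000_000, 0, 100_000, 3, 40))  # 252.0
```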
<span id="b2811e92"></span>
## 精调\-按算力付费
<span aceTableMode="list" aceTableWidth="3,3,3"></span>
|算力规格 |计费方式 |定价|\
| | |元/小时 |
|---|---|---|
|方舟A型模型单元 |按量后付费 |25 |
|方舟B型模型单元 |按量后付费 |15 |
|方舟C型模型单元 |按量后付费 |10 |
|方舟D型模型单元 |按量后付费 |20 |
> 训练费用 = 训练计费时长 × 使用的模型单元总价 = 训练计费时长 × 模型单元数 × 模型单元单价。
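按算力的训练费用公式同样可以用一行 Python 验证(示意代码,单元数与时长为自拟的假设值):

```python
def capacity_finetune_cost(hours, units, unit_price):
    # 训练费用 = 训练计费时长 × 模型单元数 × 模型单元单价
    return hours * units * unit_price

# 假设:方舟A型模型单元 25 元/小时,4 个单元训练 6 小时
print(capacity_finetune_cost(6, 4, 25))  # 600
```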
<span id="c6d128f7"></span>
## 推理\-在线推理
<span aceTableMode="list" aceTableWidth="3,2,2,2"></span>
|精调模型对应的基础模型 |条件(千 token)|输入|输出|\
| | |元/百万token |元/百万token |
|---|---|---|---|
|doubao\-seed\-1.6 |输入长度 [0, 32] |1.60 |16.00 |
|^^|输入长度 (32, 128] |2.40 |32.00 |
|doubao\-seed\-1.6\-flash |输入长度 [0, 32] |0.30 |3.00 |
|^^|输入长度 (32, 128] |0.60 |6.00 |
|doubao\-1.5\-pro\-32k |\- |2.00 |5.00 |
|doubao\-1.5\-lite\-32k |\- |0.75 |1.50 |
|doubao\-pro\-32k |\- |0.80 |2.00 |
> 按 token 后付费价格,仅部分 doubao 模型在精调后支持按 token 付费,以[接入点创建页](https://console.volcengine.com/ark/region:ark+cn-beijing/endpoint/create)可选的付费方式为准。
<span id="0c211d41"></span>
## 推理\-批量推理
<span aceTableMode="list" aceTableWidth="3,2,1,1,2"></span>
|精调模型对应的基础模型 |条件(千 token)|输入|缓存命中|输出|\
| | |元/百万token |元/百万token |元/百万token |
|---|---|---|---|---|
|doubao\-seed\-1.6 |输入长度 [0, 32] |0.40 |0.16 |4.00 |
|^^|输入长度 (32, 128] |0.60 |0.16 |8.00 |
|^^|输入长度 (128, 256] |1.20 |0.16 |12.00 |
|doubao\-seed\-1.6\-flash |输入长度 [0, 32] |0.075 |0.03 |0.75 |
|^^|输入长度 (32, 128] |0.15 |0.03 |1.50 |
|^^|输入长度 (128, 256] |0.30 |0.03 |3.00 |
|doubao\-1.5\-pro\-32k |\- |0.40 |0.16 |1.00 |
|doubao\-1.5\-lite\-32k |\- |0.15 |0.06 |0.30 |
|doubao\-pro\-32k |\- |0.80 |0.16 |2.00 |
> 按 token 后付费,相比在线推理价格低至 50%。
<span id="c26435c9"></span>
# 模型单元
<span aceTableMode="list" aceTableWidth="3,3,3"></span>
|机型 |计费方式 |定价|\
| | |元/个 |
|---|---|---|
|方舟A型模型单元 |按购买时长后付费 |25.00 |
|^^|包月预付费 |16700.00 |
|方舟B型模型单元 |按购买时长后付费 |15.00 |
|^^|包月预付费 |10400.00 |
|方舟C型模型单元 |按购买时长后付费 |10.00 |
|^^|包月预付费 |7100.00 |
|方舟D型模型单元 |按购买时长后付费 |20.00 |
|^^|包月预付费 |12800.00 |
> 支持「按购买时长后付费」和「包月预付费」两种方式叠加购买,可灵活组合。
> **提供** [单元计算器](https://console.volcengine.com/ark/region:ark+cn-beijing/endpoint/create) 估算需要的机型数量。更推荐通过实际业务流量压测,计算需要的机型和数量。
<span id="3adb5876"></span>
# 工具及插件
<span id="f2e7c4f6"></span>
## 联网内容插件
<span aceTableMode="list" aceTableWidth="3,2,4"></span>
|服务项 |价格|说明 |\
| |元/千次 | |
|---|---|---|
|联网资源 |4 |实时搜索互联网公开域内容每月提供2万次免费额度。 |
|头条资源 |6 |实时搜索今日头条图文内容,并提供内容详情信息供展示交互卡片。 |
|抖音资源 |6 |实时搜索抖音百科内容,并提供内容详情信息供展示交互卡片。 |
|墨迹天气 |6 |实时搜索墨迹天气内容资源。 |
> * 出账及计费:按量后付费
> * 用量:每次请求产生的调用次数,可返回结构体的 **source_type** 字段计算得到。
> * 更多说明请参见 [联网内容插件功能说明](/docs/82379/1338552)。
<span id="abf4f1e8"></span>
## 豆包助手
<span aceTableMode="list" aceTableWidth="3,2,4"></span>
|服务项 |价格|说明 |\
| |元/次 | |
|---|---|---|
|日常沟通 |0.1 |全能助手,自然交流,多轮对话,高情商人格化聊天。 |
|深度沟通 |0.2 |深度理解,精准解析,先思考再回答,复杂问题尽在掌握。 |
|联网搜索 |0.2 |全网搜索,信源丰富,无需费力找资料,一键搜索实时资讯。 |
|边想边搜 |0.5 |逻辑缜密,深度洞察,遇难题问豆包,想得更深,答得更准。 |
> * 出账及计费:按量后付费
> * 用量:每次请求产生的调用次数,可返回结构体的 **source_type** 字段计算得到。
> * 更多说明请参见 [联网内容插件功能说明](/docs/82379/1338552)。
<span id="bce8c602"></span>
## 知识库
<span aceTableMode="list" aceTableWidth="6,3"></span>
|服务项 |价格 |
|---|---|
|计算资源\-知识库【旗舰版】 |0.45 元/CU/小时 |
|离线存储资源\-知识库【旗舰版】 |0.0015 元/GB/小时 |
|标准计算资源\-知识库【标准版】 |0.0416 元/知识库/小时 |
|文本向量模型\-知识库【通用】 |0.0005 元/千token |
|文本向量模型(多功能版)\-知识库【通用】 |0.0005 元/千token |
|文本向量模型Doubao\-embedding\-知识库【通用】 |0.0005 元/千token |
|文本向量模型Doubao\-embedding\-large\-知识库【通用】 |0.0007 元/千token |
|多模态向量模型Doubao\-embedding\-vision\-text\-知识库【通用】 |0.0007 元/千token |
|多模态向量模型Doubao\-embedding\-vision\-image\-知识库【通用】 |0.0018 元/千token |
|重排模型\-知识库【通用】 |0.0005 元/千token |
> 更多说明请参见 [知识库计费](/docs/82379/1263336)。
<span id="f47e6c9b"></span>
# Coding Plan 个人版
<span aceTableMode="list" aceTableWidth="3,3,3"></span>
|套餐类型 |订阅时长 |价格 |
|---|---|---|
|Lite 套餐 |1 个月 |40 元/月 |
|^^|3 个月 |120 元/季 |
|Pro 套餐 |1 个月 |200 元/月 |
|^^|3 个月 |600 元/季 |
> 套餐信息及特惠活动参见[套餐概览](/docs/82379/1925114)。

`POST https://ark.cn-beijing.volces.com/api/v3/contents/generations/tasks` [ ](https://api.volcengine.com/api-explorer/?action=CreateContentsGenerationsTasks&data=%7B%7D&groupName=%E8%A7%86%E9%A2%91%E7%94%9F%E6%88%90API&query=%7B%7D&serviceCode=ark&version=2024-01-01)[运行](https://api.volcengine.com/api-explorer/?action=CreateContentsGenerationsTasks&data=%7B%7D&groupName=%E8%A7%86%E9%A2%91%E7%94%9F%E6%88%90API&query=%7B%7D&serviceCode=ark&version=2024-01-01)
本文介绍创建视频生成任务 API 的输入输出参数,供您使用接口时查阅字段含义。模型会依据传入的图片及文本信息生成视频,待生成完成后,您可以按条件查询任务并获取生成的视频。
:::tip
请确保您的账户余额大于等于 200 元([前往充值](https://console.volcengine.com/finance/fund/recharge)),或已[购买资源包](https://console.volcengine.com/common-buy/fast/ark_bd%7C%7Cd682ppeeq1mp7kd5q0e0),否则无法开通 seedance 2.0 及 seedance 2.0 fast 模型。
:::
**模型能力==^new^==**
* **seedance 2.0 & 2.0 fast==^new^==** ** (有声视频/无声视频)**
* **多模态参考生视频==^new^==**:输入++参考图片(0~9)+参考视频(0~3)+参考音频(0~3)+文本提示词(可选)++ 生成 1 个目标视频。注意不可单独输入音频,应至少包含 1 个参考视频或图片。支持生成全新视频、编辑视频、延长视频,[阅读教程](https://www.volcengine.com/docs/82379/2291680) 获取详细代码示例。
* **图生视频\-首尾帧**:输入++首帧图片+尾帧图片+文本提示词(可选)++ 生成 1 个目标视频。
* **图生视频\-首帧**:输入++首帧图片+文本提示词(可选)++ 生成 1 个目标视频。
* **文生视频**:输入++文本提示词++生成 1 个目标视频。
* **seedance 1.5 pro (有声视频/无声视频)**
【图生视频\-首尾帧】【图生视频\-首帧】【文生视频】
* **seedance 1.0 pro**
【图生视频\-首尾帧】【图生视频\-首帧】【文生视频】
* **seedance 1.0 pro fast**
【图生视频\-首帧】【文生视频】
* **seedance 1.0 lite**
* **doubao\-seedance\-1\-0\-lite\-t2v** 文生视频
* **doubao\-seedance\-1\-0\-lite\-i2v**
* 参考图生视频:根据您输入的**++参考图片1\-4张++ ** +++文本提示词(可选)++ 生成 1 个目标视频。
* 图生视频\-首尾帧
* 图生视频\-首帧
Tips:一键展开折叠,快速检索内容
打开页面右上角开关,**ctrl** + **f** 可检索页面内所有内容。
<span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_cae7ddb0e1977b68b353f17897b8574c.png) </span>
```mixin-react
return (<Tabs>
<Tabs.TabPane title="在线调试" key="4rK5FhUg"><RenderMd content={`<APILink link="https://api.volcengine.com/api-explorer/?action=CreateContentsGenerationsTasks&data=%7B%7D&groupName=%E8%A7%86%E9%A2%91%E7%94%9F%E6%88%90API&query=%7B%7D&serviceCode=ark&version=2024-01-01" description="API Explorer 您可以通过 API Explorer 在线发起调用,无需关注签名生成过程,快速获取调用结果。"></APILink>
`}></RenderMd></Tabs.TabPane>
<Tabs.TabPane title="鉴权说明" key="iRuPtuk6"><RenderMd content={`本接口仅支持 API Key 鉴权请在 [获取 API Key](https://console.volcengine.com/ark/region:ark+cn-beijing/apiKey) 页面获取长效 API Key
`}></RenderMd></Tabs.TabPane>
<Tabs.TabPane title="快速入口" key="5LZLMN0J"><RenderMd content={` [ ](#)[体验中心](https://console.volcengine.com/ark/region:ark+cn-beijing/experience/vision) <span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_2abecd05ca2779567c6d32f0ddc7874d.png =20x) </span>[模型列表](https://www.volcengine.com/docs/82379/1330310?lang=zh#2705b333) <span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_a5fdd3028d35cc512a10bd71b982b6eb.png =20x) </span>[模型计费](https://www.volcengine.com/docs/82379/1544106?redirect=1&lang=zh#02affcb8) <span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_afbcf38bdec05c05089d5de5c3fd8fc8.png =20x) </span>[API Key](https://console.volcengine.com/ark/region:ark+cn-beijing/apiKey?apikey=%7B%7D)
<span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_57d0bca8e0d122ab1191b40101b5df75.png =20x) </span>[调用教程](https://www.volcengine.com/docs/82379/1366799) <span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_f45b5cd5863d1eed3bc3c81b9af54407.png =20x) </span>[接口文档](https://www.volcengine.com/docs/82379/1520758) <span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_1609c71a747f84df24be1e6421ce58f0.png =20x) </span>[常见问题](https://www.volcengine.com/docs/82379/1359411) <span>![图片](https://portal.volccdn.com/obj/volcfe/cloud-universal-doc/upload_bef4bc3de3535ee19d0c5d6c37b0ffdd.png =20x) </span>[开通模型](https://console.volcengine.com/ark/region:ark+cn-beijing/openManagement?LLM=%7B%7D&OpenTokenDrawer=false)
`}></RenderMd></Tabs.TabPane></Tabs>);
```
---
<span id="5qndT7DS"></span>
## 请求参数
> 跳转 [响应参数](#y2hhTyHB)
<span id="wsGzv1pD"></span>
### 请求体
---
**model** `string` %%require%%
您需要调用的模型 IDModel ID。请先[开通模型服务](https://console.volcengine.com/ark/region:ark+cn-beijing/openManagement?LLM=%7B%7D&OpenTokenDrawer=false),并[查询 Model ID](https://www.volcengine.com/docs/82379/1330310)。
您也可通过 Endpoint ID 来调用模型,获得限流、计费类型(前付费/后付费)、运行状态查询、监控、安全等高级能力,可参考[获取 Endpoint ID](https://www.volcengine.com/docs/82379/1099522)。
---
**content** `object[]` %%require%%
输入给模型,生成视频的信息,支持文本、图片、音频、视频、样片任务 ID。
:::warning
seedance 2.0 系列模型不支持直接上传含有真人人脸的参考图/视频。为了便利创作者对肖像的使用,平台推出了以下解决方案,详情参见 [教程](https://www.volcengine.com/docs/82379/2291680?lang=zh#5c67c9a1)。
* 支持使用部分模型的含人脸原始产物作为输入素材
* 支持使用预置虚拟人像作为输入素材
* 支持使用已授权真人素材作为输入
:::
支持以下几种组合:
* **文本**
* **文本(可选)+ 图片**
* **文本(可选)+ 视频**
* **文本(可选)+ 图片 + 音频**
* **文本(可选)+ 图片 + 视频**
* **文本(可选)+ 视频 + 音频**
* **文本(可选)+ 图片 + 视频 + 音频**
* **样片任务 ID**:样片指使用 seedance 模型成功生成的样片视频,模型可基于样片生成高质量正式视频。
信息类型
---
**文本信息** `object`
输入给模型的提示词信息。
属性
---
content.**type ** `string` %%require%%
输入内容的类型,此处应为 `text`
---
content.**text ** `string` %%require%%
输入给模型的文本提示词,描述期望生成的视频。
:::tip
* 提示词语言支持:所有模型均支持中英文提示词seedance 2.0 及 seedance 2.0 fast 额外支持日语、印尼语、西班牙语、葡萄牙语。
* 提示词字数建议:中文提示词不超过 500 字,英文提示词不超过 1000 词。字数过多易导致信息分散,模型可能忽略细节、仅关注重点,进而造成视频缺失部分元素。
* 更多使用技巧:提示词的详细使用技巧,请参见 [seedance 提示词指南](https://www.volcengine.com/docs/82379/2222480?lang=zh)。
:::
---
**图片信息==^new^==** `object`
输入给模型的图片信息。
属性
---
content.**type ** `string` %%require%%
输入内容的类型,此处应为 `image_url`
---
content.**image_url ** `object` %%require%%
输入给模型的图片对象。
属性
---
content.image_url.**url ** `string` %%require%%
图片 URL 、图片 Base64 编码、素材 ID。
* 图片 URL填入图片的公网 URL。
* Base64 编码:将本地文件转换为 Base64 编码字符串,然后提交给大模型。遵循格式:`data:image/<图片格式>;base64,<Base64编码>`,注意 `<图片格式>` 需小写,如 `data:image/png;base64,{base64_image}`
* 素材 ID用于视频生成的预置素材及虚拟人像的 ID遵循格式 asset://<ASSET_ID\>,可从 [素材&虚拟人像库](https://console.volcengine.com/ark/region:ark+cn-beijing/experience/vision?modelId=doubao-seedance-2-0-260128) 获取。
:::tip 传入单张图片要求
* 格式jpeg、png、webp、bmp、tiff、gif。其中 seedance 1.5 pro 新增支持 heic 和 heif。
* 宽高比(宽/高)(0.4, 2.5)
* 宽高长度px(300, 6000)
* 大小:单张图片小于 30 MB请求体大小不超过 64 MB。大文件请勿使用 Base64 编码。
* 图片数量:
    * 图生视频\-首帧1 张
    * 图生视频\-首尾帧2 张
    * seedance 2.0 & 2.0 fast 多模态参考生视频1~9 张
    * seedance 1.0 lite 参考图生视频1~4 张
:::
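将本地图片转成上述 Base64 编码字符串,可参考下面的 Python 草稿(示意代码,函数名为自拟;注意格式中的图片格式需小写):

```python
import base64

def image_data_url_from_bytes(data: bytes, fmt: str = "png") -> str:
    # 遵循格式 data:image/<图片格式>;base64,<Base64编码>,<图片格式> 需小写
    b64 = base64.b64encode(data).decode("ascii")
    return f"data:image/{fmt.lower()};base64,{b64}"

def image_data_url(path: str, fmt: str = "png") -> str:
    # 读取本地图片文件,转为可直接填入 image_url.url 的字符串
    with open(path, "rb") as f:
        return image_data_url_from_bytes(f.read(), fmt)
```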
---
content.**role ** `string` `条件必填`
图片的位置或用途。
:::warning
* **图生视频\-首帧**、**图生视频\-首尾帧**、**多模态参考生视频**(包括参考图、视频、音频)为 3 种互斥场景,**不可混用**。
* **多模态参考生视频**可通过提示词指定参考图片作为首帧/尾帧,间接实现“首尾帧+多模态参考”效果。若需严格保障首尾帧和指定图片一致,**优先使用图生视频\-首尾帧**(配置 role 为 first_frame/last_frame)。
:::
图生视频\-首帧
* **支持模型:** 所有图生视频模型
* **字段role取值** 需要传入1个 image_url 对象,字段 role 为 first_frame 或不填。
图生视频\-首尾帧
* **支持模型:** seedance 2.0 & 2.0 fast、seedance 1.5 pro、seedance 1.0 pro、seedance 1.0 lite i2v
* **字段role取值** 需要传入 2 个 image_url 对象,且字段 role 必填。
* 首帧图片对应的字段 role 为 first_frame
* 尾帧图片对应的字段 role 为 last_frame
:::tip
传入的首尾帧图片可相同。首尾帧图片的宽高比不一致时,以首帧图片为主,尾帧图片会自动裁剪适配。
:::
图生视频\-参考图
* **支持模型:** seedance 2.0 & 2.0 fast1~9 张图片、seedance 1.0 lite i2v1~4 张图片)
* **字段role取值** 必填,每张参考图对应的字段 role 均为 reference_image。
:::tip
参考图生视频功能的文本提示词,可以用自然语言指定多张图片的组合。但若想有更好的指令遵循效果,**推荐使用“[图1]xxx[图2]xxx”的方式来指定图片**。
示例 1戴着眼镜穿着蓝色T恤的男生和柯基小狗坐在草坪上3D卡通风格
示例 2[图1]戴着眼镜穿着蓝色T恤的男生和[图2]的柯基小狗,坐在[图3]的草坪上3D卡通风格
:::
---
**视频信息==^new^==** `object`
输入给模型的视频信息。仅 seedance 2.0 & 2.0 fast 支持输入视频。
方舟平台信任 seedance 2.0 及 2.0 fast 模型生成的含人脸视频,您可使用**本账号下近30天内由上述模型生成的含人脸原始视频**,作为输入素材进行二次创作,详情参见 [教程](https://www.volcengine.com/docs/82379/2291680?lang=zh#86c3831f)。
属性
content.**type ** `string` %%require%%
输入内容的类型,此处应为`video_url`
---
content.**video_url** ** ** `object` %%require%%
输入给模型的视频对象。
属性
content.video_url.**url ** `string` %%require%%
视频URL、素材 ID。
* 视频 URL填入视频的公网 URL。
* 素材 ID用于视频生成的预置素材及虚拟人像视频的 ID遵循格式 asset://<ASSET_ID\>,可从[素材&虚拟人像库](https://console.volcengine.com/ark/region:ark+cn-beijing/experience/vision?modelId=doubao-seedance-2-0-260128)获取。
:::tip 传入单个视频要求
* 视频格式mp4、mov支持的编码格式见下表。
* 分辨率480p、720p、1080p
* 时长:单个视频时长 [2, 15] s最多传入 3 个参考视频,所有视频总时长不超过 15 s。
* 尺寸:
* 宽高比(宽/高):[0.4, 2.5]
* 宽高长度px[300, 6000]
* 总像素数:[640×640=409600, 2206×946=2086876],即宽和高的乘积符合 [409600, 2086876] 的区间要求。
* 大小:单个视频不超过 50 MB。
* 帧率 (FPS)[24, 60]
:::
---
content.**role ** `string` `条件必填`
视频的位置或用途。当前仅支持 reference_video参考视频
---
**音频信息==^new^==** `object`
输入给模型的音频信息。仅 seedance 2.0&2.0 fast 支持输入音频。
注意不可单独输入音频,应至少包含 1 个参考视频或图片。
属性
content.**type ** `string` %%require%%
输入内容的类型,此处应为`audio_url`
---
content.**audio_url** ** ** `object` %%require%%
输入给模型的音频对象。
属性
content.audio_url.**url ** `string` %%require%%
音频 URL 、音频 Base64 编码、素材 ID。
* 音频 URL填入音频的公网 URL。
* Base64 编码:将本地文件转换为 Base64 编码字符串,然后提交给大模型。遵循格式:`data:audio/<音频格式>;base64,<Base64编码>`,注意 `<音频格式>` 需小写,如 `data:audio/wav;base64,{base64_audio}`
* 素材 ID用于视频生成的虚拟人的音频素材 ID遵循格式 asset://<ASSET_ID\>,可从[素材&虚拟人像库](https://console.volcengine.com/ark/region:ark+cn-beijing/experience/vision?modelId=doubao-seedance-2-0-260128)获取。
:::tip 传入单个音频要求
* 格式wav、mp3
* 时长:单个音频时长 [2, 15] s最多传入 3 段参考音频,所有音频总时长不超过 15 s。
* 大小:单个音频不超过 15 MB请求体大小不超过 64 MB。大文件请勿使用 Base64 编码。
:::
---
content.**role ** `string` `条件必填`
音频的位置或用途。当前仅支持 reference_audio参考音频
---
**样片信息 ** `object`
基于样片任务 ID生成正式视频。仅 seedance 1.5 pro 支持该功能。[阅读](https://www.volcengine.com/docs/82379/1366799?lang=zh#5acd28c8)[文档](https://www.volcengine.com/docs/82379/1366799?lang=zh#5acd28c8) 获取 draft 功能的使用教程和注意事项。
属性
---
content.**type ** `string` %%require%%
输入内容的类型,此处应为 `draft_task`
---
content.**draft_task** ** ** `object` %%require%%
输入给模型的样片任务。
属性
---
content.draft_task.**id ** `string` %%require%%
样片任务 ID。平台将自动复用 Draft 视频使用的用户输入(**model、** content.**text、** content.**image_url、generate_audio、seed、ratio、duration、camera_fixed ** ),生成正式视频。其余参数支持指定,不指定将使用本模型的默认值。
使用分为两步。Step 1调用本接口生成 Draft 视频。Step 2如果确认 Draft 视频符合预期,可基于 Step 1 返回的 Draft 视频任务 ID调用本接口生成最终视频。[阅读文档](https://www.volcengine.com/docs/82379/1366799?lang=zh#5acd28c8) 获取详细教程。
---
**callback_url** `string`
填写本次生成任务结果的回调通知地址。当视频生成任务有状态变化时,方舟将向此地址推送 POST 请求。
回调请求内容结构与[查询任务API](https://www.volcengine.com/docs/82379/1521309)的返回体一致。
回调返回的 status 包括以下状态:
* queued排队中。
* running任务运行中。
* succeeded任务成功。如发送失败即 5 秒内没有接收到成功发送的信息),将回调三次。
* failed任务失败。如发送失败即 5 秒内没有接收到成功发送的信息),将回调三次。
* expired任务超时即任务处于**运行中或排队中**状态超过过期时间。可通过 **execution_expires_after** 字段设置过期时间。
---
**return_last_frame** `boolean` `默认值 false`
* true返回生成视频的尾帧图像。设置为 `true` 后,可通过 [查询视频生成任务接口](https://www.volcengine.com/docs/82379/1521309) 获取视频的尾帧图像。尾帧图像的格式为 png宽高像素值与生成的视频保持一致无水印。
使用该参数可实现生成多个连续视频:以上一个生成视频的尾帧作为下一个视频任务的首帧,快速生成多个连续视频,调用示例详见 [教程](https://www.volcengine.com/docs/82379/1366799?lang=zh#141cf7fa)。
* false不返回生成视频的尾帧图像。
---
**service_tier** `string` `默认值 default`
> 不支持修改已提交任务的服务等级
> seedance 2.0 & 2.0 fast 不支持离线推理
指定处理本次请求的服务等级类型,枚举值:
* default在线推理模式RPM 和并发数配额较低(详见 [模型列表](https://www.volcengine.com/docs/82379/1330310?lang=zh#2705b333)),适合对推理时效性要求较高的场景。
* flex离线推理模式TPD 配额更高(详见 [模型列表](https://www.volcengine.com/docs/82379/1330310?lang=zh#2705b333)),价格为在线推理的 50%,适合对推理时延要求不高的场景。
---
**execution_expires_after ** `integer` `默认值 172800`
任务超时阈值。指定任务提交后的过期时间(单位:秒),从 **created at** 时间戳开始计算。默认值 172800 秒,即 48 小时。取值范围:[3600, 259200]。
不论使用哪种 **service_tier**,都建议根据业务场景设置合适的超时时间。超过该时间后任务会被自动终止,并标记为`expired`状态。
---
**generate_audio ** `boolean` `默认值 true`
> 仅 seedance 2.0 & 2.0 fast、seedance 1.5 pro 支持
控制生成的视频是否包含与画面同步的声音。
* true模型输出的视频包含同步音频。模型会基于文本提示词与视觉内容自动生成与之匹配的人声、音效及背景音乐。建议将对话部分置于双引号内以优化音频生成效果。例如男人叫住女人“你记住,以后不可以用手指指月亮。”
* false模型输出的视频为无声视频。
:::warning
生成的有声视频均为单声道,和传入的音频声道数无关。
:::
---
**draft ** `boolean` `默认值 false`
> 仅 seedance 1.5 pro 支持
控制是否开启样片模式。[阅读文档](https://www.volcengine.com/docs/82379/1366799?lang=zh#5acd28c8) 获取使用教程和注意事项。
* true开启样片模式生成一段预览视频快速验证场景结构、镜头调度、主体动作与 prompt 意图是否符合预期。消耗 token 数较正常视频更少,使用成本更低。
* false关闭样片模式正常生成一段视频。
:::tip
开启样片模式后,将使用 480p 分辨率生成 Draft 视频(使用其他分辨率会报错),不支持返回尾帧功能,不支持离线推理功能。
:::
---
**tools==^new^==** ** ** `object[]`
> 仅 seedance 2.0 & 2.0 fast 支持
配置模型要调用的工具。
属性
tools.**type ** `string`
指定使用的工具类型。
* web_search联网搜索工具。[阅读教程](https://www.volcengine.com/docs/82379/1366799?lang=zh#c40ed3ef) 获取详细代码示例。
:::tip
* 开启联网搜索后,模型会根据用户的提示词自主判断是否搜索互联网内容(如商品、天气等)。可提升生成视频的时效性,但也会增加一定的时延。
* 实际搜索次数可通过 [查询视频生成任务 API](https://www.volcengine.com/docs/82379/1521309?lang=zh) 返回的 usage.tool_usage.**web_search** 字段获取,如果为 0 表示未搜索。
:::
---
**safety_identifier==^new^==** `string`
终端用户的唯一标识符,用于协助平台检测您的应用中可能违反火山方舟使用政策的用户。该标识符为英文字符串,需保证对单个用户固定且唯一,长度不超过 64 个字符。推荐传入对用户名、用户 ID 或邮箱进行哈希处理后生成的字符串,避免泄露用户隐私信息。
---
&nbsp;
:::warning 部分参数升级说明
* **对于 resolution、ratio、duration、frames、seed、camera_fixed、watermark 参数,平台升级了参数传入方式,示例如下。所有模型依然兼容支持旧方式。**
* 不同模型,可能对应支持不同的参数与取值,详见 [输出视频格式](https://www.volcengine.com/docs/82379/1366799?lang=zh#9fe4cce0)。当输入的参数或取值不符合所选的模型时,该参数将被忽略或触发报错:
* 新方式:在 request body 中直接传入参数。此方式为**强校验,** 若参数填写错误,模型会返回错误提示。
* 旧方式:在文本提示词后追加 \-\-[parameters]。此方式为**弱校验,** 若参数填写错误,该参数将被忽略或触发报错。
:::
**新方式(推荐):在 request body 中直接传入参数**
```JSON
...
// Specify the aspect ratio of the generated video as 16:9, duration as 5 seconds, resolution as 720p, seed as 11, and include a watermark. The camera is not fixed.
"model": "doubao-seedance-1-5-pro-251215",
"content": [
{
"type": "text",
"text": "小猫对着镜头打哈欠"
}
],
// All parameters must be written in full; abbreviations are not supported
"resolution": "720p",
"ratio":"16:9",
"duration": 5,
// "frames": 29, Either duration or frames is required
"seed": 11,
"camera_fixed": false,
"watermark": true
...
```
**旧方式:在文本提示词后追加 \-\-[parameters]**
```JSON
...
// Specify the aspect ratio of the generated video as 16:9, duration as 5 seconds, resolution as 720p, seed as 11, and include a watermark. The camera is not fixed.
"model": "doubao-seedance-1-5-pro-251215",
"content": [
{
"type": "text",
"text": "小猫对着镜头打哈欠 --rs 720p --rt 16:9 --dur 5 --seed 11 --cf false --wm true"
// "text": "小猫对着镜头打哈欠 --resolution 720p --ratio 16:9 --duration 5 --seed 11 --camerafixed false --watermark true"
}
]
...
```
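按“新方式”构造请求体,可以用下面的 Python 草稿(示意代码;注释中的 requests 调用与 `ARK_API_KEY` 为假设写法,实际鉴权请求头以本文“鉴权说明”为准):

```python
import json

# 构造“新方式”请求体:参数直接置于 request body 顶层
payload = {
    "model": "doubao-seedance-1-5-pro-251215",
    "content": [{"type": "text", "text": "小猫对着镜头打哈欠"}],
    "resolution": "720p",
    "ratio": "16:9",
    "duration": 5,
    "seed": 11,
    "camera_fixed": False,
    "watermark": True,
}
body = json.dumps(payload, ensure_ascii=False)

# 发送请求(示意,按 API Key 鉴权):
# requests.post(
#     "https://ark.cn-beijing.volces.com/api/v3/contents/generations/tasks",
#     headers={"Authorization": f"Bearer {ARK_API_KEY}",
#              "Content-Type": "application/json"},
#     data=body.encode("utf-8"))
```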
---
**resolution ** `string`
> seedance 2.0 & 2.0 fast、seedance 1.5 pro、seedance 1.0 lite 默认值:`720p`
> seedance 1.0 pro & pro\-fast 默认值:`1080p`
视频分辨率,枚举值:
* 480p
* 720p
* 1080pseedance 1.0 lite 参考图场景、seedance 2.0 & 2.0 fast 不支持)
---
**ratio ** `string`
> seedance 2.0 & 2.0 fast、seedance 1.5 pro 默认值为 `adaptive`
> seedance 1.0 lite 参考图场景默认值为 `16:9`
> 其他模型:文生视频默认值 `16:9`,图生视频默认值 `adaptive`
生成视频的宽高比例。不同宽高比对应的宽高像素值见下方表格。
* 16:9
* 4:3
* 1:1
* 3:4
* 9:16
* 21:9
* adaptive根据输入自动选择最合适的宽高比详见下文说明。
:::warning **adaptive ** 适配规则
当配置 **ratio** 为 `adaptive` 时,模型会根据生成场景自动适配宽高比;实际生成的视频宽高比可通过 [查询视频生成任务 API](https://www.volcengine.com/docs/82379/1521309?lang=zh) 返回的 **ratio** 字段获取。
**支持模型:**
* seedance 2.0 & 2.0 fast、seedance 1.5 Pro 支持
* 其他模型仅图生视频场景支持,注意 seedance 1.0 lite 参考图场景不支持。
**取值规则:**
* 文生视频:根据输入的提示词,智能选择最合适的宽高比。
* 首帧 / 首尾帧生视频:根据上传的首帧图片比例,自动选择最接近的宽高比。
* 多模态参考生视频:根据用户提示词意图判断,如果是首帧生视频/编辑视频/延长视频,以该图片/视频为准选择最接近的宽高比;否则,以传入的第一个媒体文件为准(优先级:视频>图片)选择最接近的宽高比。
:::
&nbsp;
不同宽高比对应的宽高像素值
Note图生视频选择的宽高比与您上传的图片宽高比不一致时方舟会对您的图片进行居中裁剪详细规则见 [图片裁剪规则](https://www.volcengine.com/docs/82379/1366799?lang=zh#f76aafc8)。
|分辨率 |宽高比|宽高像素值|宽高像素值|\
| | |seedance 1.0 系列 |seedance 1.5 pro|\
| | | |seedance 2.0 & 2.0 fast |
|---|---|---|---|
|480p |16:9 |864×480 |864×496 |
|^^|4:3 |736×544 |752×560 |
|^^|1:1 |640×640 |640×640 |
|^^|3:4 |544×736 |560×752 |
|^^|9:16 |480×864 |496×864 |
|^^|21:9 |960×416 |992×432 |
|720p |16:9 |1248×704 |1280×720 |
|^^|4:3 |1120×832 |1112×834 |
|^^|1:1 |960×960 |960×960 |
|^^|3:4 |832×1120 |834×1112 |
|^^|9:16 |704×1248 |720×1280 |
|^^|21:9 |1504×640 |1470×630 |
|1080p |16:9 |1920×1088 |1920×1080 |\
|> 1.0 lite 参考图场景不支持seedance 2.0 & 2.0 fast 不支持 | | | |
|^^|4:3 |1664×1248 |1664×1248 |
|^^|1:1 |1440×1440 |1440×1440 |
|^^|3:4 |1248×1664 |1248×1664 |
|^^|9:16 |1088×1920 |1080×1920 |
|^^|21:9 |2176×928 |2206×946 |
---
**duration** `integer` `默认值 5`
> duration 和 frames 二选一即可frames 的优先级高于 duration。如果您希望生成整数秒的视频建议指定 duration。
生成视频时长,仅支持整数,单位:秒。
* seedance 1.0 pro、seedance 1.0 pro fast、seedance 1.0 lite: [2, 12] s。
* seedance 1.5 pro: [4,12] 或设置为`-1`
* seedance 2.0 & 2.0 fast: [4,15] 或设置为`-1`
:::warning
seedance 2.0 & 2.0 fast、seedance 1.5 pro 支持两种配置方法
* 指定具体时长:支持有效范围内的任一整数。
* 智能指定:设置为 `-1`,表示由模型在有效范围内自主选择合适的视频长度(整数秒)。实际生成视频的时长可通过 [查询视频生成任务 API](https://www.volcengine.com/docs/82379/1521309?lang=zh) 返回的 **duration** 字段获取。注意视频时长与计费相关,请谨慎设置。
:::
---
**frames** `integer`
> seedance 2.0 & 2.0 fast、seedance 1.5 pro 暂不支持
> duration 和 frames 二选一即可frames 的优先级高于 duration。如果您希望生成小数秒的视频建议指定 frames。
生成视频的帧数。通过指定帧数,可以灵活控制生成视频的长度,生成小数秒的视频。
由于 frames 的取值限制,仅能支持有限小数秒,您需要根据公式推算最接近的帧数。
* 计算公式:帧数 = 时长 × 帧率24
* 取值范围:支持 [29, 289] 区间内所有满足 `25 + 4n` 格式的整数值,其中 n 为正整数。
例如:假设需要生成 2.4 秒的视频,帧数=2.4×24=57.6。由于 frames 不支持 57.6,此时您只能选择一个最接近的值。根据 25+4n 计算出最接近的帧数为 57实际生成的视频为 57/24=2.375 秒。
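上面“选择最接近的合法帧数”的推算可以写成一段 Python 草稿(示意代码,函数名为自拟):

```python
def nearest_frames(seconds, fps=24):
    """取最接近 时长×帧率 且满足 25+4nn≥1、落在 [29, 289] 内的帧数。"""
    target = seconds * fps
    candidates = range(29, 290, 4)  # 29, 33, ..., 289,均为 25+4n 形式
    return min(candidates, key=lambda f: abs(f - target))

f = nearest_frames(2.4)  # 2.4×24 = 57.6,最接近的合法帧数为 57
print(f, f / 24)         # 57 2.375
```

即实际生成 57/24 = 2.375 秒的视频,与文中示例一致。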
---
**seed** `integer` `默认值 -1`
种子整数,用于控制生成内容的随机性。
取值范围:[\-1, 2^32\-1]之间的整数。
:::warning
* 相同的请求下,模型收到不同的 seed 值(不指定 seed 值或令 seed 取值为 \-1 时会使用随机数替代,或手动变更 seed 值),将生成不同的结果。
* 相同的请求下,模型收到相同的 seed 值,会生成类似的结果,但不保证完全一致。
:::
---
**camera_fixed** `boolean` `默认值 false`
> 参考图场景不支持seedance 2.0 & 2.0 fast 暂不支持
是否固定摄像头。枚举值:
* true固定摄像头。平台会在用户提示词中追加固定摄像头的指令实际效果不保证。
* false不固定摄像头。
---
**watermark** `boolean` `默认值 false`
生成视频是否包含水印。枚举值:
* false不含水印。
* true含有水印。
---
<span id="oCS1tULg"></span>
## Response parameters
> Jump to [Request parameters](#RxN8G2nH)
**id** `string`
Video generation task ID. Retained for 7 days only (counted from the **created at** timestamp); it is purged automatically after that.
* With `"draft": true`, this is a draft video task ID.
* With `"draft": false`, this is a normal video task ID.
Creating a video generation task is an asynchronous call. After obtaining the ID, query the task's status via the [query video generation task API](https://www.volcengine.com/docs/82379/1521309). Once the task succeeds, it returns the generated video's `video_url`.
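The create-then-poll flow can be sketched as follows. This is a hedged sketch: the base URL and endpoint path below are assumptions for illustration, and the real URL, auth scheme, and response schema should be taken from the query-task API document linked above.

```python
import json
import time
import urllib.request

API_BASE = "https://ark.cn-beijing.volces.com/api/v3"  # assumed base URL

def task_status_url(task_id: str) -> str:
    # Assumed endpoint path; check the official API reference.
    return f"{API_BASE}/contents/generations/tasks/{task_id}"

def wait_for_video(task_id: str, api_key: str, interval: float = 5.0) -> str:
    """Poll the query-task endpoint until the task finishes; return video_url."""
    while True:
        req = urllib.request.Request(
            task_status_url(task_id),
            headers={"Authorization": f"Bearer {api_key}"},
        )
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        if data.get("status") == "succeeded":
            return data["content"]["video_url"]
        if data.get("status") == "failed":
            raise RuntimeError(data.get("error"))
        time.sleep(interval)  # task IDs stay queryable for 7 days
```

Polling every few seconds is sufficient; the ID remains valid for 7 days, so a slow poll loop cannot lose the result.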

View File

@@ -4,6 +4,71 @@
---
## 2026-04-17 — v0.18.3: friendly copyright-error messages + Jimeng-style consecutive image renaming
**Status**: ✅ Done | **Acceptance**: all 14 automated tests pass (11 unit + 3 E2E)
### Changes
#### Backend
1. **Friendly message for copyright errors** — added a Chinese error-code mapping for `OutputVideoSensitiveContentDetected.PolicyViolation` (the copyright block triggered by well-known IP such as Marvel): "the generated video contains copyright-restricted content (well-known IP, celebrity likeness, etc.) and was blocked by the system; please revise the prompt and retry". The mapping matches the exact code returned by the API and does not affect the existing message for the parent `OutputVideoSensitiveContentDetected` (sensitive content)
#### Frontend
2. **Jimeng-style consecutive renaming on image deletion** — previously, deleting 图片2 left 图片3 with its old name, so uploading a new image produced two "图片2"s. After the fix:
- after `inputBar.ts::removeReference` deletes a reference, the remaining references of the same type (image/video/audio) are renumbered consecutively in order (图片1/图片2/图片3 with no gaps)
- `DOMParser` parses editorHtml, finds the @mention span with the matching `data-ref-id`, and updates its textContent, so `@图片3` in the prompt bar automatically becomes `@图片2`
- the thumbnail strip and the prompt bar refresh in visual sync
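The renumbering rule itself can be sketched independently of the store code. A hedged sketch (the real `removeReference` also rewrites editorHtml via `DOMParser`, which is omitted here):

```python
# Jimeng-style consecutive renumbering: after a deletion, the remaining
# references of each type are relabeled 1..n with no gaps.
PREFIX = {"image": "图片", "video": "视频", "audio": "音频"}

def relabel(refs: list[dict]) -> list[dict]:
    counters: dict[str, int] = {}
    out = []
    for ref in refs:
        counters[ref["type"]] = counters.get(ref["type"], 0) + 1
        out.append({**ref, "label": f"{PREFIX[ref['type']]}{counters[ref['type']]}"})
    return out

# deleting 图片2 leaves two images and a video; relabel closes the gap:
remaining = [{"type": "image"}, {"type": "image"}, {"type": "video"}]
print([r["label"] for r in relabel(remaining)])  # ['图片1', '图片2', '视频1']
```

Each media type keeps its own counter, which is why deleting an image never renumbers videos or audio.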
#### Test coverage
- **11 Vitest unit tests** (test/unit/removeReferenceRelabeling.test.ts): three image scenarios, independent video/audio numbering, empty editorHtml, no @mention, invalid id, partial @, rapid consecutive deletions, and other edge cases
- **3 Playwright E2E tests** (test/e2e/bug2-rename.spec.ts): real-browser verification that uploading 3 images → deleting the middle one → uploading again produces no numbering conflict
#### Documentation housekeeping
3. **AirDrama root-directory archive**: 8 outdated MD documents moved to `archive/` (_settle_payment double settlement / v0.15.1 deployment / announcement HTML / migration to Volcengine / all-platform billing audit / Volcengine latency analysis / image-upload blob / requirements iteration_20260320)
4. **video-shuoshan/docs archive**: 6 outdated documents moved to `docs/archive/` (celery polling fix / design-review / PRD / test-report / two old API docs)
5. **New 1080P plan**: `AirDrama/1080P分辨率支持开发计划.md`, reviewed and corrected against the official API docs over 3 rounds (wrong 21:9 pixel values, missing _settle_payment, VideoDetailModal re-edit, regenerate, 6 spots in the API response, serializer naming, GenerationRecord.resolution field already existing, etc.), with 5 known billing gaps flagged
### Files changed
- `backend/utils/airdrama_client.py` — ERROR_MESSAGES gains the PolicyViolation mapping
- `web/src/store/inputBar.ts` — removeReference rewritten (Jimeng logic + editorHtml sync)
- `web/test/unit/removeReferenceRelabeling.test.ts` — 11 unit tests (new)
- `web/test/e2e/bug2-rename.spec.ts` — 3 E2E tests (new)
- `AirDrama/1080P分辨率支持开发计划.md` — 1080P development plan (new)
- `AirDrama/版本管理.md` — v0.18.3 entry added
- `AirDrama/项目总览与待办.md` — completed items + 1080P P0 backlog
- 16 MD documents archived into the two archive directories
### Trigger
- User feedback: Marvel-based generations failed with an unfriendly English error message
- User feedback: deleting a middle image then uploading again produced duplicate numbering (Jimeng's interaction used as the reference)
- Volcengine shipped 1080P on 2026-04-16; development needed to be planned ahead of time
---
## 2026-04-13 — v0.18.2: asset-page fixes + re-edit asset leak + audio validation
**Status**: ✅ Done | **Acceptance**: pending testing
### Changes
#### Frontend
1. **Asset-library references unviewable on the asset pages** — `assetVideoToTask` on the Admin/Team asset pages used the `asset://` protocol URL directly as `previewUrl`, which browsers cannot load. Now it detects `asset://`, falls back to `thumb_url` (the real TOS thumbnail address), and marks the reference `isAssetRef`. The `BackendTask` and `AssetVideo` type definitions were also fixed to include the `thumb_url` field
2. **Re-edit asset leak** — `reEdit()` mixed asset-library references into the `references` array (a comment claimed they were filtered, but they were not), so even after the user deleted the @tag the old asset was still sent via the `filesToUpload` path. Fix: `reEdit/regenerate` filter with `.filter(!isAssetRef)`, and `PromptInput.extractText` re-syncs the `assetMentions` store on every DOM change
3. **Audio cannot be the only reference asset** — the Seedance API supports neither "audio only" nor "text + audio". `canSubmit()` drops the `!hasText` condition and instead checks `references` and `assetMentions` for an image/video; clicking the disabled Toolbar button shows a toast explaining why
4. **Broken thumbnails for asset-library references** — `pollStatus` now protects cross-project assets
5. **Audio ♫ glyph overflow** — rendered via CSS `::before` instead, so it no longer pollutes the prompt text
### Files changed
- `web/src/pages/AdminAssetsPage.tsx` — isAssetUrl + thumb_url handling
- `web/src/pages/TeamAssetsPage.tsx` — same as above
- `web/src/types/index.ts` — BackendTask/AssetVideo gain thumb_url
- `web/src/store/generation.ts` — reEdit/regenerate filter out isAssetRef
- `web/src/components/PromptInput.tsx` — extractText syncs assetMentions
- `web/src/store/inputBar.ts` — canSubmit audio validation hardened
- `web/src/components/Toolbar.tsx` — toast explaining the audio restriction
---
## 2026-03-19 — v0.9.7: login risk control phase 2 — IP geolocation + anomaly detection + Feishu alerts + auto-ban
**Status**: ✅ Done | **Acceptance**: ✅ passed local verification (the IP138 online API still needs verifying after deployment to Alibaba Cloud)

View File

@@ -0,0 +1,430 @@
# HTTPS Redirect & Certificate Issuance Flow
> **Audience: AI agents / developers.** This document summarizes the complete recipe for automatic HTTP→HTTPS redirects and automatic Let's Encrypt certificates on a K3s + Traefik v3 + cert-manager stack. Other projects can follow it directly to adapt their own CI/CD pipelines and K8s manifests.
---
## 0. Onboarding guide for other projects (quick reference)
### What you need to do
#### 1. New file: `k8s/cert-manager-issuer.yaml`
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: airlabsv001@gmail.com
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          ingress:
            class: traefik
```
> If the cluster already has a ClusterIssuer with this name (multiple projects sharing one cluster), this step can be skipped; `kubectl apply` is idempotent.
#### 2. New file: `k8s/redirect-https-middleware.yaml`
```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: redirect-https
spec:
  redirectScheme:
    scheme: https
    permanent: true
```
#### 3. Edit `k8s/ingress.yaml`
Make sure it contains the following 3 annotations and the TLS configuration:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: your-ingress-name
  annotations:
    kubernetes.io/ingress.class: "traefik"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"  # ← triggers automatic certificate issuance
    traefik.ingress.kubernetes.io/router.middlewares: "default-redirect-https@kubernetescrd"  # ← HTTP→HTTPS redirect
spec:
  tls:
    - hosts:
        - your-domain-api.example.com  # ← change to your domains
        - your-domain.example.com
      secretName: your-project-tls  # ← Secret that stores the certificate; any name, but don't clash with other projects
  rules:
    - host: your-domain-api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: your-backend-service
                port:
                  number: 8000
    - host: your-domain.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: your-web-service
                port:
                  number: 80
```
#### 4. Edit the CI/CD pipeline (deploy.yaml)
In the `kubectl apply` deploy step, add these two lines **before** ingress.yaml:
```yaml
# before, only these:
kubectl apply -f k8s/backend-deployment.yaml
kubectl apply -f k8s/web-deployment.yaml
kubectl apply -f k8s/ingress.yaml
# after:
kubectl apply -f k8s/cert-manager-issuer.yaml        # ← new: register the Let's Encrypt CA
kubectl apply -f k8s/redirect-https-middleware.yaml  # ← new: HTTP→HTTPS redirect middleware
kubectl apply -f k8s/backend-deployment.yaml
kubectl apply -f k8s/web-deployment.yaml
kubectl apply -f k8s/ingress.yaml
```
> **Order matters**: cert-manager-issuer and the middleware must be applied before the ingress. Otherwise the ingress references resources that do not exist yet, which makes certificate issuance fail or leaves the redirect inactive.
### Cluster prerequisites (once per server)
Run the following commands **once, manually, over SSH on each K8s master node**; they do not belong in CI/CD:
```bash
# 1. Confirm cert-manager is installed
kubectl get pods -n cert-manager
# If it isn't, install it first: https://cert-manager.io/docs/installation/

# 2. Configure Traefik's global HTTP→HTTPS redirect
kubectl -n kube-system patch deployment traefik --type=json -p '[
  {"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--entryPoints.web.http.redirections.entryPoint.to=:443"},
  {"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--entryPoints.web.http.redirections.entryPoint.scheme=https"},
  {"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--entryPoints.web.http.redirections.entryPoint.permanent=true"}
]'
```
> **Key point**: write `to=:443`, not `to=websecure`. Traefik's internal websecure port is 8443, so `websecure` puts `:8443` in the redirect URL and users cannot reach it.
### Verification checklist
```bash
# HTTP redirect
curl -I http://your-domain
# expected: 308 Permanent Redirect → https://your-domain

# certificate validity
curl -v https://your-domain 2>&1 | grep "issuer"
# expected: issuer: ... Let's Encrypt ...

# certificate status
kubectl get certificate -A
# expected: Ready = True
```
---
## 1. Automatic HTTP → HTTPS redirect
### Problem
Visiting the site via `http://` does not automatically redirect to `https://`.
### Root cause
By default, Traefik v3 (the Ingress controller bundled with K3s) only creates HTTPS routes for Ingresses that have TLS configured; HTTP requests have no matching route, so nothing redirects them.
### Fix
Add global HTTP→HTTPS redirect arguments to the Traefik Deployment (no per-Ingress configuration needed; applies automatically to every project in the cluster):
```
--entryPoints.web.http.redirections.entryPoint.to=:443
--entryPoints.web.http.redirections.entryPoint.scheme=https
--entryPoints.web.http.redirections.entryPoint.permanent=true
```
**Command** (on the K8s master node):
```bash
kubectl -n kube-system patch deployment traefik --type=json -p '[
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--entryPoints.web.http.redirections.entryPoint.to=:443"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--entryPoints.web.http.redirections.entryPoint.scheme=https"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--entryPoints.web.http.redirections.entryPoint.permanent=true"}
]'
```
> **Note**: use `to=:443`, not `to=websecure`. Traefik's internal websecure entrypoint listens on port 8443, so `to=websecure` makes the redirect URL carry the `:8443` port number and the redirect fails for users. `:443` guarantees the redirect targets the standard HTTPS port.
### Test environment status
Fixed ✅ — `http://airflow-studio.test.airlabs.art` → 308 → `https://airflow-studio.test.airlabs.art`
### Production status
Not fixed ❌ — the same `kubectl patch` command still needs to be run on the production K8s cluster.
---
## 2. SSL certificate issuance flow
### Architecture
```
User browser
     │
┌────▼────┐
│  DNS    │  *.airlabs.art → cluster public IP
└────┬────┘
     │
┌────▼───────────────┐
│  Traefik (K3s)     │  Ingress controller
│  ports 80 / 443    │
└────┬───────────────┘
     │
┌────▼───────────────────┐
│  Ingress resource      │  maps domain → Service
│  + TLS secretName      │  where the certificate is stored
│  + cert-manager annot. │  triggers automatic issuance
└────┬───────────────────┘
     │
┌────▼───────────────────┐
│  cert-manager          │  watches Ingress changes and
│  (in-cluster Pod)      │  manages the certificate lifecycle
└────┬───────────────────┘
     │
┌────▼───────────────────┐
│  Let's Encrypt         │  free certificate authority (CA);
│  (external service)    │  validates domains via the ACME protocol
└────────────────────────┘
```
### Detailed steps
#### Step 1: the ClusterIssuer defines the CA configuration
File: `k8s/cert-manager-issuer.yaml`
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory  # Let's Encrypt production API
    email: airlabsv001@gmail.com                            # expiry-notification address
    privateKeySecretRef:
      name: letsencrypt-prod-key                            # ACME account private-key storage
    solvers:
      - http01:
          ingress:
            class: traefik                                  # use Traefik to answer the challenge
```
- `ClusterIssuer` is a cluster-scoped resource, usable from every namespace
- after the ACME account is registered, its private key is stored in the `letsencrypt-prod-key` Secret
#### Step 2: the Ingress triggers certificate issuance
File: `k8s/ingress.yaml`
```yaml
metadata:
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"  # ← tells cert-manager which Issuer to use
spec:
  tls:
    - hosts:
        - airflow-studio-api.airlabs.art  # ← domains that need a certificate
        - airflow-studio.airlabs.art
      secretName: airflow-studio-tls      # ← the certificate is stored in this Secret
```
When cert-manager detects that this Ingress carries the `cert-manager.io/cluster-issuer` annotation, it automatically creates:
1. a `Certificate` resource
2. a `CertificateRequest` resource
3. an `Order` resource
4. a `Challenge` resource (one per domain)
#### Step 3: HTTP-01 validation (the critical part)
cert-manager uses **HTTP-01 validation** to prove you control the domain:
```
Let's Encrypt server                                your cluster
        │                                                │
        │ 1. issues you a token                          │
        │ ──────────────────────────────────────────►    │
        │                                                │
        │ 2. the response is placed at                   │
        │    http://<domain>/.well-known/                │
        │    acme-challenge/<token>                      │  cert-manager automatically
        │                                                │  creates a temporary Ingress
        │ 3. Let's Encrypt fetches that URL to verify    │  route serving this path
        │ ──────────────────────────────────────────►    │
        │                                                │
        │ 4. validation passes; certificate is issued    │
        │ ◄──────────────────────────────────────────    │
```
**Prerequisites for successful validation**
| Condition | Notes |
|------|------|
| Correct DNS | the domain must resolve to the cluster's public IP |
| Port 80 open | Let's Encrypt validates over HTTP port 80 only |
| Traefik running | it must serve `/.well-known/acme-challenge/` requests |
| cert-manager installed | a cert-manager Pod must be running in the cluster |
| No firewall block | security groups/firewalls must not block Let's Encrypt from reaching port 80 |
#### Step 4: certificate storage and use
After validation passes:
- cert-manager writes the certificate and private key into the Secret `airflow-studio-tls`
  - `tls.crt` — certificate chain (server certificate + intermediate)
  - `tls.key` — private key
- Traefik reads that Secret automatically and uses it for the HTTPS handshake
#### Step 5: automatic renewal
- Let's Encrypt certificates are valid for **90 days**
- cert-manager renews them automatically **30 days** before expiry (`renewalTime`)
- renewal runs the same HTTP-01 validation as the initial issuance
---
## 3. Troubleshooting the production "Not secure" warning
### Current production certificate (checked externally)
```
Subject: CN=airflow-studio-api.airlabs.art
Issuer: C=US, O=Let's Encrypt, CN=R13
Valid: 2026-04-04 ~ 2026-07-03
SAN: airflow-studio-api.airlabs.art, airflow-studio.airlabs.art
Chain: complete (R13 → ISRG Root X1)
Verify: return:1 (OK)
```
**The certificate itself is valid.** Verification from the openssl command line passes completely.
### Possible reasons the browser says "Not secure"
#### Reason 1: production port 80 does not redirect to HTTPS (most likely)
```bash
# test result
curl http://airflow-studio.airlabs.art/login   → HTTP 200 (page served directly, no redirect)
```
Production port 80 serves the page content directly (via nginx); while the address bar shows `http://`, the browser flags the page as "Not secure". This is not a certificate problem; **users are simply never steered to HTTPS**.
**Fix**: run the same Traefik redirect patch command on the production cluster (see section 1).
#### Reason 2: no HSTS header
Even with the redirect in place, the first visit still travels over HTTP. An HSTS header makes the browser remember to always use HTTPS.
Set it at the Ingress layer (adding it in `web/nginx.conf` has no effect when Traefik terminates TLS):
```yaml
# ingress.yaml annotation
traefik.ingress.kubernetes.io/router.middlewares: "default-hsts@kubernetescrd"
```
together with an HSTS Middleware:
```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: hsts
spec:
  headers:
    stsSeconds: 31536000
    stsIncludeSubdomains: true
    stsPreload: true
```
#### Reason 3: mixed content
The page loads over HTTPS but some of its resources (images, API calls, JS) load over HTTP.
- Frontend source audited: **no hard-coded `http://`** ✅
- Possible source: video/image URLs stored in the database that start with `http://`
- To check: open the browser console (F12) and look for "Mixed Content" warnings
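Stored media URLs can also be pre-screened before reaching for the browser console. A minimal sketch of the scheme check (where the URL list comes from is up to the project; the sample data is illustrative):

```python
# Flag stored media URLs that would trigger mixed-content warnings
# when the page itself is served over HTTPS.
def find_insecure_urls(urls: list[str]) -> list[str]:
    """Return URLs using plain http:// (https:// and relative URLs pass)."""
    return [u for u in urls if u.startswith("http://")]

sample = ["https://cdn.example.com/a.mp4", "http://cdn.example.com/b.jpg", "/static/c.png"]
print(find_insecure_urls(sample))  # ['http://cdn.example.com/b.jpg']
```

Any hit here would be flagged by the browser as mixed content even when the certificate and redirect are both correct.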
#### Reason 4: cert-manager not deployed on the production cluster
Production and test run on **different K8s clusters**. Confirm the production cluster also has cert-manager installed:
```bash
kubectl get pods -n cert-manager
```
If it is missing, certificates are never issued automatically and Traefik falls back to a self-signed certificate (which browsers flag as not secure).
---
## 4. Test vs production comparison checklist
| Check | Test | Production | Command |
|--------|--------|--------|----------|
| cert-manager running | ✅ | ❓ TBD | `kubectl get pods -n cert-manager` |
| ClusterIssuer exists | ✅ | ❓ TBD | `kubectl get clusterissuer` |
| Certificate Ready | ✅ Ready | ❓ TBD | `kubectl get certificate -A` |
| TLS Secret exists | ✅ | ❓ TBD | `kubectl get secret airflow-studio-tls` |
| Certificate chain complete | ✅ Let's Encrypt | ✅ Let's Encrypt | `openssl s_client -connect <domain>:443` |
| HTTP→HTTPS redirect | ✅ 308 | ❌ returns 200 | `curl -I http://<domain>` |
| Traefik redirect configured | ✅ | ❌ | `kubectl get deploy traefik -n kube-system -o yaml` |
| Port 80 externally reachable | ✅ | ✅ | `curl http://<domain>` |
| Port 443 externally reachable | ✅ | ✅ | `curl -k https://<domain>` |
| Frontend mixed content | ✅ none | ❓ TBD | browser F12 console |
---
## 5. Production fix checklist
### Step 1: SSH to the production K8s master node
### Step 2: check cert-manager
```bash
kubectl get pods -n cert-manager
kubectl get clusterissuer
kubectl get certificate -A
kubectl describe certificate airflow-studio-tls
```
### Step 3: if the certificate is unhealthy, delete it to force re-issuance
```bash
kubectl delete secret airflow-studio-tls
# cert-manager re-issues automatically (takes 1-3 minutes)
kubectl get certificate -A -w   # wait for Ready=True
```
### Step 4: configure the global HTTP→HTTPS redirect
```bash
kubectl -n kube-system patch deployment traefik --type=json -p '[
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--entryPoints.web.http.redirections.entryPoint.to=:443"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--entryPoints.web.http.redirections.entryPoint.scheme=https"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--entryPoints.web.http.redirections.entryPoint.permanent=true"}
]'
```
### Step 5: verify
```bash
# HTTP redirect
curl -I http://airflow-studio.airlabs.art/login
# expected: 308 → https://airflow-studio.airlabs.art/login

# HTTPS certificate
curl -v https://airflow-studio.airlabs.art/login 2>&1 | grep -E "SSL|subject|issuer"
```

View File

@@ -0,0 +1,65 @@
# AI Prompt Optimization Feature
**Status**: not started
**Created**: 2026-04-17
## Background
Users often write prompts that are too simple or that ignore the Seedance 2.0 prompt conventions (not referencing assets as 「图片n」, missing core elements, vague camera language, etc.), which hurts generation quality.
Bring in Volcengine's official SKILL.md (Seedance 2.0 Prompt Optimizer) so users can optimize a finished prompt with one click.
## Design
### User flow
1. The user types a raw prompt into the input box (with @asset references)
2. They click the "AI optimize" button next to the input box
3. A preview modal shows the optimized prompt
4. "Accept" replaces the original prompt; "Cancel" keeps it unchanged
5. A number of tokens is consumed (charged to the team token pool)
### Technical plan
**Backend**
- New endpoint: `POST /api/v1/prompt/optimize`
- Input: `prompt` (raw prompt including `@素材` markers), `asset_refs` (list of asset references: label + type + url)
- Calls a Doubao model (suggested: the latest `doubao-seed-2.0`; the exact model id still needs confirming)
- System prompt: SKILL.md reworked into a **single-shot output** mode (no multi-turn interaction)
- Returns: `optimized_prompt` (the optimized text) + `token_used` (tokens consumed)
- Also deducts from the team token pool
**Frontend**
- Add an "AI optimize" button (✨ icon) to the right of the `PromptInput` component
- On click: loading state → call the backend → open the `PromptOptimizeModal` preview modal
- The modal shows: original vs optimized comparison, token-cost notice, accept/cancel buttons
- On accept: write the optimized result back into the editor (keeping @mention tags rendered correctly)
**SKILL.md adaptation notes**
- Drop Step 0 (proactive guiding questions) → one input, one output
- Drop Step 3's "multi-choice template interaction" → on ambiguity/conflict, annotate in the output instead (e.g. `【注:检测到 X 冲突,已按 Y 处理】`)
- Keep Step 2 (auto-mapping assets to `@图N`) and Step 4 (structured output: optimized prompt / issues found / relevant principles)
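The endpoint contract above can be written down concretely. A sketch of the assumed request/response shapes; the field names come from the plan, while all values are illustrative:

```python
# Assumed shape of POST /api/v1/prompt/optimize per the plan above.
request_body = {
    "prompt": "@图片1 美女跳舞",  # raw prompt with @asset markers
    "asset_refs": [
        {"label": "图片1", "type": "Image", "url": "https://example.com/a.jpg"},
    ],
}
response_body = {
    "optimized_prompt": "【全局设定】…",  # optimized text in the three-part structure
    "token_used": 412,                    # tokens consumed, deducted from the team pool
}
assert set(response_body) == {"optimized_prompt", "token_used"}
```

Keeping the response to exactly these two fields lets the frontend show the diff and the token cost without any extra round trip.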
## Billing
- Prompt optimization shares the **same token pool** as video generation (the billing model users already know)
- No separate cap; deduct by actual token consumption
- Frontend copy: "this optimization costs about X tokens"
## Model choice
- **Preferred**: the strongest Doubao 2.0 model (check the Volcengine docs for the latest model id)
- Fallback: `doubao-1-5-pro-32k` (cheaper; adequate for the task)
## Open questions
- [ ] The exact model id of the strongest current Doubao 2.0 model
- [ ] Whether token-pool deduction needs dual team/personal quotas
- [ ] Frontend fallback messaging when optimization fails (LLM error, token limit exceeded)
## Acceptance criteria
1. A rough prompt (e.g. 「美女跳舞」) → the optimized result follows SKILL.md's three-part structure (global setting / timeline script / look & constraints)
2. Prompts containing `@素材` → the optimized result uses `@图1/@图2/@视频1` markers correctly
3. Conflict/missing-info cases → annotated in the output, never silently filled in
4. Token cost is deducted from the team pool correctly
5. The user can accept or cancel in the modal
## References
- SKILL.md (the original skill file provided by Volcengine)
- `docs/API文档/seedance 2.0 系列教程.MD`, the "prompt techniques" section starting at line 2152

View File

@@ -5,6 +5,7 @@ metadata:
   annotations:
     kubernetes.io/ingress.class: "traefik"
     cert-manager.io/cluster-issuer: "letsencrypt-prod"
+    traefik.ingress.kubernetes.io/router.middlewares: "default-redirect-https@kubernetescrd"
 spec:
   tls:
     - hosts:

View File

@@ -0,0 +1,8 @@
+apiVersion: traefik.io/v1alpha1
+kind: Middleware
+metadata:
+  name: redirect-https
+spec:
+  redirectScheme:
+    scheme: https
+    permanent: true

View File

@@ -5,6 +5,7 @@ interface DropdownItem {
   label: string;
   value: string;
   icon?: ReactNode;
+  disabled?: boolean;
 }

 interface DropdownProps {
@@ -41,8 +42,10 @@ export function Dropdown({ items, value, onSelect, trigger, minWidth = 150 }: Dr
       {items.map((item) => (
         <div
           key={item.value}
-          className={`${styles.item} ${value === item.value ? styles.selected : ''}`}
+          className={`${styles.item} ${value === item.value ? styles.selected : ''} ${item.disabled ? styles.disabled : ''}`}
+          style={item.disabled ? { opacity: 0.4, cursor: 'not-allowed' } : undefined}
           onClick={() => {
+            if (item.disabled) return;
             onSelect(item.value);
             setOpen(false);
           }}

View File

@@ -39,9 +39,12 @@ const DownloadIcon = () => (
 // Mention tag with thumbnail + hover preview
 function MentionTag({ label, thumbUrl, assetType }: { label: string; thumbUrl?: string; assetType?: string }) {
   const [hover, setHover] = useState(false);
+  const [thumbBroken, setThumbBroken] = useState(false);
   const ref = useRef<HTMLSpanElement>(null);
   const [pos, setPos] = useState({ top: 0, left: 0 });
   const isAudio = assetType === 'Audio' || assetType === 'audio';
+  const isVideo = assetType === 'Video' || assetType === 'video';
+  const showThumb = thumbUrl && !thumbBroken;

   return (
     <>
@@ -49,7 +52,7 @@ function MentionTag({ label, thumbUrl, assetType }: { label: string; thumbUrl?:
         ref={ref}
         className={styles.mentionTag}
         onMouseEnter={() => {
-          if (!isAudio && thumbUrl && ref.current) {
+          if (!isAudio && showThumb && ref.current) {
             const rect = ref.current.getBoundingClientRect();
             setPos({ top: rect.top - 8, left: rect.left + rect.width / 2 });
             setHover(true);
@@ -59,18 +62,30 @@ function MentionTag({ label, thumbUrl, assetType }: { label: string; thumbUrl?:
       >
         {isAudio ? (
           <span style={{ marginRight: 3, fontSize: 13, verticalAlign: 'middle' }}>♫</span>
-        ) : thumbUrl ? (
+        ) : showThumb ? (
           <img
             src={tosThumb(thumbUrl, 28)}
             alt=""
             style={{ width: 14, height: 14, borderRadius: 3, objectFit: 'cover', verticalAlign: 'middle', marginRight: 3 }}
+            onError={() => setThumbBroken(true)}
           />
-        ) : null}
+        ) : isVideo ? (
+          <svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth="1.5" strokeLinecap="round" style={{ verticalAlign: 'middle', marginRight: 3, opacity: 0.6 }}>
+            <rect x="2" y="4" width="20" height="16" rx="2" />
+            <path d="M10 9l5 3-5 3V9z" fill="currentColor" stroke="none" />
+          </svg>
+        ) : (
+          <svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth="1.5" strokeLinecap="round" style={{ verticalAlign: 'middle', marginRight: 3, opacity: 0.6 }}>
+            <rect x="3" y="3" width="18" height="18" rx="2" />
+            <circle cx="8.5" cy="8.5" r="1.5" fill="currentColor" stroke="none" />
+            <path d="M21 15l-5-5L5 21" />
+          </svg>
+        )}
         {label}
       </span>
-      {hover && thumbUrl && createPortal(
+      {hover && showThumb && createPortal(
         <div className={styles.mentionPreview} style={{ top: pos.top, left: pos.left }}>
-          <img src={tosThumb(thumbUrl, 200)} alt={label} className={styles.mentionPreviewImg} />
+          <img src={tosThumb(thumbUrl, 200)} alt={label} className={styles.mentionPreviewImg} onError={(e) => { (e.target as HTMLImageElement).style.display = 'none'; }} />
           <div className={styles.mentionPreviewLabel}>{label}</div>
         </div>,
         document.body
@@ -149,7 +164,7 @@ export function GenerationCard({ task, onOpenDetail }: Props) {
   const [detailPos, setDetailPos] = useState({ top: 0, right: 0 });
   const detailLinkRef = useRef<HTMLSpanElement>(null);
   const detailLeaveTimer = useRef<ReturnType<typeof setTimeout> | null>(null);
-  const [refPreview, setRefPreview] = useState<{ url: string; label: string; type: string; top: number; left: number } | null>(null);
+  const [refPreview, setRefPreview] = useState<{ url: string; label: string; type: string; top: number; left: number; isAssetRef?: boolean } | null>(null);

   const startDetailLeave = useCallback(() => {
     if (detailLeaveTimer.current) clearTimeout(detailLeaveTimer.current);
@@ -294,11 +309,11 @@ export function GenerationCard({ task, onOpenDetail }: Props) {
               onMouseEnter={(e) => {
                 if (ref.type === 'audio') return;
                 const rect = e.currentTarget.getBoundingClientRect();
-                setRefPreview({ url: ref.previewUrl, label: ref.label, type: ref.type, top: rect.top - 8, left: rect.left + rect.width / 2 });
+                setRefPreview({ url: ref.previewUrl, label: ref.label, type: ref.type, top: rect.top - 8, left: rect.left + rect.width / 2, isAssetRef: ref.isAssetRef });
               }}
               onMouseLeave={() => setRefPreview(null)}
             >
-              {ref.type === 'video' ? (
+              {ref.type === 'video' && !ref.isAssetRef ? (
                 <video src={ref.previewUrl} className={styles.refMedia} muted />
               ) : ref.type === 'audio' ? (
                 <div className={styles.audioThumb}>
@@ -309,7 +324,7 @@ export function GenerationCard({ task, onOpenDetail }: Props) {
                 </svg>
               </div>
             ) : (
-              <img src={tosThumb(ref.previewUrl, 112)} alt={ref.label} className={styles.refMedia} />
+              <img src={tosThumb(ref.previewUrl, 112)} alt={ref.label} className={styles.refMedia} onError={(e) => { (e.target as HTMLImageElement).style.display = 'none'; }} />
             )}
           </div>
         ))}
@@ -374,7 +389,7 @@ export function GenerationCard({ task, onOpenDetail }: Props) {
           <span></span><span>{task.duration}s</span>
         </div>
         <div className={styles.detailRow}>
-          <span></span><span>720p</span>
+          <span></span><span>{task.resolution.toUpperCase()}</span>
         </div>
         <div className={styles.detailRow}>
           <span></span>
@@ -421,10 +436,10 @@ export function GenerationCard({ task, onOpenDetail }: Props) {
       {/* Reference thumbnail hover preview */}
       {refPreview && createPortal(
         <div className={styles.mentionPreview} style={{ top: refPreview.top, left: refPreview.left }}>
-          {refPreview.type === 'video' ? (
+          {refPreview.type === 'video' && !refPreview.isAssetRef ? (
             <video src={refPreview.url} className={styles.mentionPreviewImg} autoPlay loop muted playsInline />
           ) : (
-            <img src={tosThumb(refPreview.url, 300)} alt={refPreview.label} className={styles.mentionPreviewImg} />
+            <img src={tosThumb(refPreview.url, 300)} alt={refPreview.label} className={styles.mentionPreviewImg} onError={(e) => { (e.target as HTMLImageElement).style.display = 'none'; }} />
           )}
           <div className={styles.mentionPreviewLabel}>{refPreview.label}</div>
         </div>,

View File

@@ -46,6 +46,18 @@
   transition: background 0.15s, opacity 0.15s;
 }

+.mentionAudioIcon {
+  display: inline-block;
+  margin-right: 3px;
+  font-size: 13px;
+  vertical-align: middle;
+  pointer-events: none;
+}
+.mentionAudioIcon::before {
+  content: '\266B'; /* ♫ rendered via CSS, not textContent — avoids polluting prompt text */
+}
+
 .mentionImg {
   width: 16px;
   height: 16px;

View File

@@ -88,8 +88,8 @@ export function PromptInput() {
     const isAudio = opts.refType === 'audio' || opts.assetType === 'Audio';
     if (isAudio) {
       const icon = document.createElement('span');
-      icon.textContent = '\u266B';
-      icon.style.cssText = 'margin-right:3px;font-size:13px;vertical-align:middle;pointer-events:none';
+      icon.className = styles.mentionAudioIcon;
+      icon.setAttribute('aria-hidden', 'true');
       span.appendChild(icon);
     } else if (opts.thumbUrl) {
       const img = document.createElement('img');
@@ -98,6 +98,7 @@ export function PromptInput() {
       img.setAttribute('width', '16');
       img.setAttribute('height', '16');
       img.style.cssText = 'width:16px;height:16px;border-radius:3px;object-fit:cover;vertical-align:middle;margin-right:3px;display:inline-block;pointer-events:none';
+      img.onerror = () => { img.style.display = 'none'; };
       span.appendChild(img);
     }
     // @ 前缀隐藏(textContent 保留用于模式匹配,视觉上不显示)
@@ -253,6 +254,27 @@ export function PromptInput() {
     if (!el) return;
     setPrompt(el.textContent || '');
     setEditorHtml(el.innerHTML);
+    // Sync assetMentions from DOM — prevents stale refs after deleting @mention spans
+    const mentions: Record<string, unknown>[] = [];
+    el.querySelectorAll('[data-ref-type="asset"]').forEach((span) => {
+      const s = span as HTMLElement;
+      if (s.dataset.assetId) {
+        mentions.push({
+          assetId: s.dataset.assetId,
+          label: s.dataset.assetName || s.textContent?.replace('@', '') || '',
+          thumbUrl: s.dataset.thumbUrl || '',
+          assetType: s.dataset.assetType || 'Image',
+          duration: parseFloat(s.dataset.duration || '0'),
+        });
+      } else if (s.dataset.assetGroupId) {
+        mentions.push({
+          groupId: s.dataset.assetGroupId,
+          label: s.dataset.groupName || s.textContent?.replace('@', '') || '',
+          thumbUrl: s.dataset.thumbUrl || '',
+        });
+      }
+    });
+    useInputBarStore.setState({ assetMentions: mentions });
   }, [setPrompt, setEditorHtml]);

   // Remove orphaned mention spans when a reference is deleted

View File

@@ -73,6 +73,7 @@ export function RecordDetailModal({ record: r, onClose, showTeam, showCost }: Pr
           <InfoItem label="模型" value={r.model === 'seedance_2.0_fast' ? 'AirDrama Fast' : 'AirDrama'} />
           <InfoItem label="模式" value={MODE_MAP[r.mode] || r.mode} />
           <InfoItem label="比例" value={r.aspect_ratio || '-'} />
+          <InfoItem label="分辨率" value={r.resolution ? r.resolution.toUpperCase() : '-'} />
           <InfoItem label="时长" value={r.duration != null ? `${r.duration}` : '-'} />
           <InfoItem label="Tokens" value={(r.tokens_consumed || 0).toLocaleString()} />
           {showCost && <InfoItem label="费用" value={`¥${(r.cost_amount || 0).toFixed(2)}`} />}

View File

@@ -62,37 +62,37 @@
   padding-bottom: 8px;
 }

-/* Quota display */
+/* Quota display — 今日剩余生成次数(v0.10.0 起次数制) */
 .quota {
   display: flex;
   flex-direction: column;
   align-items: center;
-  gap: 2px;
+  gap: 3px;
   cursor: pointer;
   padding: 8px 4px;
   border-radius: 8px;
   transition: background 0.15s;
+  min-width: 56px;
 }
 .quota:hover {
   background: rgba(255, 255, 255, 0.04);
 }
-.diamondIcon {
-  flex-shrink: 0;
-}
 .quotaNumber {
-  font-size: 14px;
+  font-size: 18px;
   font-weight: 600;
   color: var(--color-text-primary);
   line-height: 1;
+  font-variant-numeric: tabular-nums;
+  letter-spacing: 0.5px;
 }
 .quotaLabel {
-  font-size: 9px;
+  font-size: 10px;
   color: var(--color-text-secondary);
   white-space: nowrap;
+  letter-spacing: 0.5px;
 }

 /* Admin button */

View File

@@ -12,8 +12,11 @@ export function Sidebar() {
   const isActive = (path: string) => location.pathname === path;
   const role = user?.role;

+  // 今日剩余生成次数(v0.10.0 起计费体系为次数+金额,不再是秒数池)
   const dailyRemaining = quota
-    ? (quota.daily_seconds_limit === -1 ? Infinity : Math.max(0, quota.daily_seconds_limit - quota.daily_seconds_used))
+    ? (quota.daily_generation_limit === -1
+        ? Infinity
+        : Math.max(0, quota.daily_generation_limit - quota.daily_generation_used))
     : 0;

   return (
@@ -70,15 +73,15 @@ export function Sidebar() {
       <div className={styles.bottom}>
         {/* Quota display - not for super admin */}
         {role !== 'super_admin' && (
-          <div className={styles.quota} onClick={() => navigate('/profile')}>
-            <svg className={styles.diamondIcon} width="16" height="16" viewBox="0 0 24 24" fill="none">
-              <path d="M6 3h12l4 8-10 12L2 11l4-8z" fill="#6c63ff" opacity="0.85" />
-              <path d="M2 11h20M6 3l4 8M18 3l-4 8M12 23l-4-12M12 23l4-12" stroke="#fff" strokeWidth="0.8" opacity="0.4" />
-            </svg>
+          <div
+            className={styles.quota}
+            onClick={() => navigate('/profile')}
+            title="今日剩余生成次数(实际扣费以火山 token 消耗为准)"
+          >
             <span className={styles.quotaNumber}>
               {dailyRemaining === Infinity ? '∞' : dailyRemaining.toLocaleString()}
             </span>
             <span className={styles.quotaLabel}></span>
           </div>
         )}

View File

@@ -3,7 +3,9 @@ import { useInputBarStore } from '../store/inputBar';
import { useGenerationStore } from '../store/generation';
import { useAuthStore } from '../store/auth';
import { Dropdown } from './Dropdown';
import { showToast } from './Toast';
import { parseAssetMentions } from '../lib/assetMentions';
import type { CreationMode, AspectRatio, Duration, Resolution, GenerationType, ModelOption } from '../types';
import styles from './Toolbar.module.css';

const VideoIcon = () => (
@@ -70,10 +72,7 @@ const generationTypeItems = [
  { label: '视频生成', value: 'video' as GenerationType, icon: <VideoIcon /> },
];

// NOTE: modelItems is now built inside the component, keyed on resolution (Fast is disabled at 1080P)
const modeItems = [
  { label: '全能参考', value: 'universal' as CreationMode, icon: <StarIcon /> },
@@ -98,9 +97,20 @@ const durationItems = Array.from({ length: 12 }, (_, i) => {
  return { label: `${v}s`, value: String(v) };
});

// Mirrors billing.py::RESOLUTION_MAP — keeps the frontend estimate consistent with backend billing
const RESOLUTION_PIXELS: Record<Resolution, Record<string, [number, number]>> = {
  '480p': {
    '16:9': [864, 496], '9:16': [496, 864], '4:3': [752, 560],
    '1:1': [640, 640], '3:4': [560, 752], '21:9': [992, 432],
  },
  '720p': {
    '16:9': [1280, 720], '9:16': [720, 1280], '4:3': [1112, 834],
    '1:1': [960, 960], '3:4': [834, 1112], '21:9': [1470, 630],
  },
  '1080p': {
    '16:9': [1920, 1080], '9:16': [1080, 1920], '4:3': [1664, 1248],
    '1:1': [1440, 1440], '3:4': [1248, 1664], '21:9': [2206, 946],
  },
};
const modeLabels: Record<CreationMode, string> = {
@@ -119,33 +129,77 @@ export function Toolbar() {
  const setAspectRatio = useInputBarStore((s) => s.setAspectRatio);
  const duration = useInputBarStore((s) => s.duration);
  const setDuration = useInputBarStore((s) => s.setDuration);
  const resolution = useInputBarStore((s) => s.resolution);
  const setResolution = useInputBarStore((s) => s.setResolution);
  const isSubmittable = useInputBarStore((s) => s.canSubmit());
  const triggerInsertAt = useInputBarStore((s) => s.triggerInsertAt);
  const isKeyframe = mode === 'keyframe';
  const references = useInputBarStore((s) => s.references);
  const editorHtml = useInputBarStore((s) => s.editorHtml);
  const team = useAuthStore((s) => s.team);
  const addTask = useGenerationStore((s) => s.addTask);

  const estimatedTokens = useMemo(() => {
    // Official formula: (input video duration + output video duration) × width × height × 24fps / 1024.
    // Must stay in sync with backend/utils/billing.py::estimate_tokens.
    // Input video duration = durations of directly uploaded video references + @mentioned asset-library videos.
    // resolution / aspectRatio are strict enum types — no || fallback, so a bad value surfaces immediately.
    const [w, h] = RESOLUTION_PIXELS[resolution][aspectRatio];
    const refVideoDur = references
      .filter((r) => r.type === 'video' && typeof r.duration === 'number')
      .reduce((sum, r) => sum + (r.duration || 0), 0);
    const mentionVideoDur = parseAssetMentions(editorHtml).durations.video;
    const totalDuration = duration + refVideoDur + mentionVideoDur;
    return Math.round((w * h * 24 * totalDuration) / 1024);
  }, [aspectRatio, duration, resolution, references, editorHtml]);
  // Resolution dropdown: 1080P is disabled while the Fast model is selected
  const resolutionItems = useMemo(() => [
    { label: '480P', value: '480p' as Resolution },
    { label: '720P', value: '720p' as Resolution },
    {
      label: model === 'seedance_2.0_fast' ? '1080P(Fast 不支持)' : '1080P',
      value: '1080p' as Resolution,
      disabled: model === 'seedance_2.0_fast',
    },
  ], [model]);

  // Model dropdown: Fast is disabled while resolution is 1080P (only AirDrama supports 1080P)
  const modelItems = useMemo(() => [
    { label: 'AirDrama', value: 'seedance_2.0', icon: <DiamondIcon /> },
    {
      label: resolution === '1080p' ? 'AirDrama Fast(不支持 1080P)' : 'AirDrama Fast',
      value: 'seedance_2.0_fast',
      icon: <LightningIcon />,
      disabled: resolution === '1080p',
    },
  ], [resolution]);
  const estimatedCost = useMemo(() => {
    const hasVideoRef = references.some((r) => r.type === 'video');
    let price = team?.token_price || 0;
    if (model === 'seedance_2.0_fast') {
      // Fast does not support 1080p, so its unit price is not split by resolution
      price = hasVideoRef ? (team?.token_price_fast_video || 0) : (team?.token_price_fast || 0);
    } else if (resolution === '1080p') {
      price = hasVideoRef ? (team?.token_price_1080p_video || 0) : (team?.token_price_1080p || 0);
    } else {
      price = hasVideoRef ? (team?.token_price_video || 0) : (team?.token_price || 0);
    }
    return (estimatedTokens * price / 1000000).toFixed(2);
  }, [estimatedTokens, model, resolution, references, team]);

  const handleSend = useCallback(() => {
    if (!isSubmittable) {
      const s = useInputBarStore.getState();
      if (s.mode === 'universal' && s.references.some((r) => r.type === 'audio')
          && !s.references.some((r) => r.type === 'image' || r.type === 'video')) {
        showToast('音频不能作为唯一的参考素材,请同时添加图片或视频');
      }
      return;
    }
    addTask();
  }, [isSubmittable, addTask]);
@@ -216,6 +270,19 @@ export function Toolbar() {
          }
        />

        {/* Resolution */}
        <Dropdown
          items={resolutionItems}
          value={resolution}
          onSelect={(v) => setResolution(v as Resolution)}
          minWidth={100}
          trigger={
            <button className={styles.btn}>
              <span className={styles.label}>{resolution.toUpperCase()}</span>
            </button>
          }
        />

        {/* Duration */}
        <Dropdown
          items={durationItems}
@@ -256,7 +323,7 @@ export function Toolbar() {
        {isSubmittable && (team?.token_price || 0) > 0 && (
          <span
            style={{ fontSize: 12, color: '#8b8ea8', whiteSpace: 'nowrap', userSelect: 'none', marginRight: 16, lineHeight: 1 }}
            title={`预估公式: (宽 × 高 × 24fps × 时长) / 1024 = tokens, tokens × 单价 / 1000000 = 费用\n⚠ 仅为预估值,实际费用以火山 API 返回的 token 数为准`}
          >
            {estimatedTokens.toLocaleString()} tokens / ¥{estimatedCost}
          </span>
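The tooltip's estimate can be checked by hand. A self-contained sketch of the formula, assuming a standalone `estimateTokens` helper and a subset of the `RESOLUTION_PIXELS` table (in `Toolbar.tsx` the computation actually lives inside the `useMemo`):

```typescript
// tokens = width × height × 24fps × total seconds / 1024, matching the comment's
// reference to backend/utils/billing.py::estimate_tokens.
const PIXELS: Record<string, [number, number]> = {
  '480p/16:9': [864, 496],
  '720p/16:9': [1280, 720],
  '1080p/16:9': [1920, 1080],
};

function estimateTokens(key: string, totalSeconds: number): number {
  const [w, h] = PIXELS[key];
  return Math.round((w * h * 24 * totalSeconds) / 1024);
}

// 5s of 720p 16:9 output: 1280 × 720 × 24 × 5 / 1024 = 108000 tokens
console.log(estimateTokens('720p/16:9', 5)); // 108000
```

Note how 1080P roughly doubles the token count of 720P for the same duration, which is why it gets its own price tier in the settings below.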


@@ -220,9 +220,10 @@ export function VideoDetailModal({ task, onClose, onReEdit, onRegenerate, onDele
    if (task.model) store.setModel(task.model as 'seedance_2.0' | 'seedance_2.0_fast');
    if (task.aspectRatio) store.setAspectRatio(task.aspectRatio as any);
    if (task.duration) store.setDuration(task.duration);
    if (task.resolution) store.setResolution(task.resolution);
    // Load references from task (exclude asset library refs — they restore via @mentions in editorHtml)
    if (task.references && task.references.length > 0) {
      const refs = task.references.filter(r => r.previewUrl && !r.isAssetRef).map(r => ({
        id: r.id,
        file: null as unknown as File,
        previewUrl: r.previewUrl,
@@ -485,7 +486,7 @@ export function VideoDetailModal({ task, onClose, onReEdit, onRegenerate, onDele
          {task.references.map((ref) => (
            <div key={ref.id} className={styles.refItem}>
              <div style={{ position: 'relative', width: 56, height: 56 }}>
                {ref.type === 'video' && !ref.isAssetRef ? (
                  <video src={ref.previewUrl} className={styles.refImg} muted style={{ cursor: 'pointer' }} onClick={() => ref.previewUrl && setRefMediaPreview({ url: ref.previewUrl, type: 'video' })} />
                ) : ref.type === 'audio' ? (
                  <div className={styles.refAudioPlaceholder} style={{ cursor: 'pointer' }} onClick={() => ref.previewUrl && setRefMediaPreview({ url: ref.previewUrl, type: 'audio' })}>
@@ -496,7 +497,7 @@ export function VideoDetailModal({ task, onClose, onReEdit, onRegenerate, onDele
                  </svg>
                </div>
                ) : ref.previewUrl ? (
                  <img src={tosThumb(ref.previewUrl, 300)} alt={ref.label} className={styles.refImg} style={{ cursor: 'zoom-in' }} onClick={() => setLightboxSrc(ref.previewUrl)} onError={(e) => { (e.target as HTMLImageElement).style.display = 'none'; }} />
                ) : (
                  <div className={styles.refAudioPlaceholder} style={{ fontSize: 12, color: 'var(--color-text-disabled)' }}></div>
                )}
@@ -536,6 +537,8 @@ export function VideoDetailModal({ task, onClose, onReEdit, onRegenerate, onDele
                <span>{task.duration}s</span>
                <span className={styles.infoBarDot} />
                <span>{task.aspectRatio}</span>
                <span className={styles.infoBarDot} />
                <span>{task.resolution.toUpperCase()}</span>
                {(task.tokensConsumed ?? 0) > 0 && (
                  <>
                    <span>{(task.tokensConsumed ?? 0).toLocaleString()} tokens</span>


@@ -17,6 +17,10 @@
  flex: 1;
  overflow-y: auto;
  overflow-x: hidden;
  /* Disable the browser's automatic scroll anchoring; when history is loaded above,
     the anchor logic in handleScroll manages position by itself. Otherwise the
     browser's default anchoring stacks on top of our manual +diff and pushes the
     page to the very bottom during slow scrolling / slow scrollbar drags. */
  overflow-anchor: none;
}

.emptyArea {


@@ -20,8 +20,11 @@ export function VideoGenerationPage() {
  const regenerate = useGenerationStore((s) => s.regenerate);
  const removeTask = useGenerationStore((s) => s.removeTask);
  const scrollRef = useRef<HTMLDivElement>(null);
  const prevLastIdRef = useRef<string | null>(null);
  const initialLoadRef = useRef(true);
  // Re-entrancy guard: while loadMore + anchor are in flight, further handleScroll
  // calls must not schedule more rAFs, or the anchor +diff accumulates and pushes
  // the page to the bottom (slow wheel / slow scrollbar-drag scenario).
  const loadMoreInFlightRef = useRef(false);
  const savedScrollTop = useGenerationStore((s) => s.savedScrollTop);
  const saveScrollPosition = useGenerationStore((s) => s.saveScrollPosition);
  const [detailTaskId, setDetailTaskId] = useState<string | null>(null);
@@ -36,9 +39,14 @@ export function VideoGenerationPage() {
    loadTasks();
  }, [loadTasks]);

  // Restore scroll position after initial load, or scroll to bottom ONLY when a new
  // task is appended at the tail. Comparing the last task's id means that prepending
  // history at the head, status updates (e.g. polling completion), and deleting an
  // entry never change the tail id, so none of them trigger the scroll — the user
  // is no longer yanked back to the bottom while browsing upward.
  useEffect(() => {
    if (tasks.length === 0) return;
    const currentLastId = tasks[tasks.length - 1]?.id ?? null;
    if (initialLoadRef.current) {
      initialLoadRef.current = false;
      // Use requestAnimationFrame to ensure DOM has rendered
@@ -50,15 +58,16 @@ export function VideoGenerationPage() {
          scrollRef.current.scrollTop = scrollRef.current.scrollHeight;
        }
      });
      prevLastIdRef.current = currentLastId;
      return;
    }

    if (currentLastId !== prevLastIdRef.current && scrollRef.current) {
      scrollRef.current.scrollTo({ top: scrollRef.current.scrollHeight, behavior: 'smooth' });
    }
    prevLastIdRef.current = currentLastId;
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [tasks]);
  // Save scroll position + auto-load older tasks when scrolled near top
  const handleScroll = useCallback(() => {
@@ -70,15 +79,20 @@ export function VideoGenerationPage() {
    const distanceFromBottom = el.scrollHeight - el.scrollTop - el.clientHeight;
    setShowScrollBottom(distanceFromBottom > 300);

    // Trigger loadMore when scrolled within 100px of the top.
    // The ref flag guards entry: only the first pass schedules loadMore + anchor;
    // later handleScroll calls (which keep firing during slow scrolling) bail out,
    // so multiple queued rAFs can't each apply another +diff.
    if (scrollRef.current.scrollTop < 100 && !loadMoreInFlightRef.current) {
      loadMoreInFlightRef.current = true;
      const el = scrollRef.current;
      const prevHeight = el.scrollHeight;
      loadMore().then(() => {
        // After older tasks are prepended, restore visual position so the user doesn't jump.
        // CSS overflow-anchor: none disables the browser's automatic anchoring; this
        // block alone is responsible now.
        requestAnimationFrame(() => {
          const diff = el.scrollHeight - prevHeight;
          if (diff > 0) el.scrollTop += diff;
          loadMoreInFlightRef.current = false;
        });
      });
    }
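The anchor arithmetic itself is simple enough to isolate. A sketch under assumed names (`anchoredScrollTop` is hypothetical — the real code mutates `el.scrollTop` directly inside the rAF callback):

```typescript
// After prepending history, shift scrollTop by exactly the height the list grew,
// so the content that was under the cursor stays put. Applying this more than once
// per load — the bug the in-flight flag prevents — would walk the view downward.
function anchoredScrollTop(prevHeight: number, newHeight: number, scrollTop: number): number {
  const diff = newHeight - prevHeight;
  return diff > 0 ? scrollTop + diff : scrollTop;
}

// List grew by 800px while the user sat at scrollTop 40 → keep them at 840.
console.log(anchoredScrollTop(2000, 2800, 40)); // 840
```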


@@ -146,6 +146,7 @@ export const videoApi = {
      model: string;
      aspect_ratio: string;
      duration: number;
      resolution: string;
      references: { url: string; type: string; role: string; label: string; thumb_url?: string; duration?: string }[];
      search_mode?: string;
      seed?: number;


@@ -31,14 +31,23 @@ function VideoThumbnail({ video, onClick }: { video: AssetVideo; onClick: () =>
  );
}

function isAssetUrl(url: string): boolean {
  return url.startsWith('asset://') || url.startsWith('Asset://');
}

function assetVideoToTask(v: AssetVideo): GenerationTask {
  const references = (v.reference_urls || []).map((ref, i) => {
    const url = ref.url || '';
    const assetRef = isAssetUrl(url);
    return {
      id: `ref_${v.task_id}_${i}`,
      type: (ref.type || 'image') as 'image' | 'video' | 'audio',
      previewUrl: assetRef ? (ref.thumb_url || '') : url,
      label: ref.label || `素材${i + 1}`,
      role: ref.role,
      isAssetRef: assetRef || undefined,
    };
  });
  return {
    id: String(v.id),
    taskId: v.task_id,
@@ -48,6 +57,7 @@ function assetVideoToTask(v: AssetVideo): GenerationTask {
    model: 'seedance_2.0',
    aspectRatio: (v.aspect_ratio as any) || '16:9',
    duration: v.duration as any,
    resolution: v.resolution,
    references,
    assetMentions: [],
    status: 'completed',
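The `asset://` branch above can be sketched in isolation; `previewUrlFor` is a hypothetical helper for illustration, with types simplified from the real `reference_urls` shape:

```typescript
// Asset-library references carry an asset:// url that is not directly renderable,
// so the preview falls back to thumb_url and the ref is flagged isAssetRef.
interface RawRef { url?: string; thumb_url?: string; }

function isAssetUrl(url: string): boolean {
  return url.startsWith('asset://') || url.startsWith('Asset://');
}

function previewUrlFor(ref: RawRef): { previewUrl: string; isAssetRef?: true } {
  const url = ref.url || '';
  const assetRef = isAssetUrl(url);
  return {
    previewUrl: assetRef ? (ref.thumb_url || '') : url,
    ...(assetRef ? { isAssetRef: true as const } : {}),
  };
}

console.log(previewUrlFor({ url: 'asset://42', thumb_url: 'https://cdn/t.jpg' }).previewUrl); // "https://cdn/t.jpg"
```

Downstream code (the detail modal, re-edit, regenerate) keys off `isAssetRef` to keep asset refs out of the direct-upload path.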


@@ -153,10 +153,10 @@ export function ProfilePage() {
        {/* Quota warning */}
        {dailyPercent >= 80 && dailyPercent < 100 && (
          <div className={styles.warningBanner}>{dailyPercent.toFixed(0)}%使</div>
        )}
        {dailyPercent >= 100 && (
          <div className={styles.dangerBanner}></div>
        )}

        {/* Consumption Overview */}
@@ -227,6 +227,7 @@ export function ProfilePage() {
          </div>
          <div className={styles.recordRight}>
            <span className={styles.recordSeconds}>¥{(r.cost_amount || 0).toFixed(2)}</span>
            {r.resolution && <span className={styles.recordMode}>{r.resolution.toUpperCase()}</span>}
            <span className={styles.recordMode}>{r.mode === 'universal' ? '全能参考' : '首尾帧'}</span>
            <span className={`${styles.recordStatus} ${styles[r.status]}`}>{statusMap[r.status]}</span>
          </div>


@@ -60,7 +60,7 @@ export function RecordsPage() {
      team_id: teamFilter ? Number(teamFilter) : undefined,
    });
    const header = '任务ID,提交时间,完成时间,耗时,团队,用户名,模型,视频时长(秒),模式,比例,分辨率,消费秒数,Tokens,费用(元),成本(元),利润(元),种子值,状态,提示词,失败原因,原始错误,参考素材数\n';
    const rows = data.results.map((r) => {
      const esc = (s: string) => s.replace(/"/g, '""').replace(/^[=+\-@]/, "'$&");
      const modeLabel = r.mode === 'universal' ? '全能参考' : '首尾帧';
@@ -70,7 +70,8 @@ export function RecordsPage() {
      const elapsed = r.completed_at ? Math.round((new Date(r.completed_at).getTime() - new Date(r.created_at).getTime()) / 1000) + '秒' : '';
      const completedAt = r.completed_at ? new Date(r.completed_at).toLocaleString('zh-CN') : '';
      const refCount = (r.reference_urls || []).length;
      const resolutionLabel = r.resolution ? r.resolution.toUpperCase() : '';
      return `"${r.ark_task_id || ''}","${new Date(r.created_at).toLocaleString('zh-CN')}","${completedAt}","${elapsed}","${r.team_name || '-'}","${r.username}","${modelLabel}","${r.duration ?? ''}","${modeLabel}","${r.aspect_ratio || ''}","${resolutionLabel}","${r.seconds_consumed}","${r.tokens_consumed || 0}","${(r.cost_amount || 0).toFixed(2)}","${(r.base_cost_amount || 0).toFixed(2)}","${profit}","${r.seed != null && r.seed !== -1 ? r.seed : ''}","${statusLabel}","${esc(r.prompt || '')}","${esc(r.error_message || '')}","${esc(r.raw_error || '')}","${refCount}"`;
    }).join('\n');
    const blob = new Blob(['\uFEFF' + header + rows], { type: 'text/csv;charset=utf-8;' });
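The `esc` helper above does two jobs worth spelling out — RFC 4180 quote doubling plus spreadsheet formula-injection defusing. A standalone copy for illustration:

```typescript
// Double embedded quotes (RFC 4180), then prefix a leading =, +, - or @ with a
// single quote so Excel / WPS treats the cell as text instead of a formula.
const esc = (s: string) => s.replace(/"/g, '""').replace(/^[=+\-@]/, "'$&");

console.log(esc('=SUM(A1:A9)')); // '=SUM(A1:A9)
console.log(esc('say "hi"'));    // say ""hi""
```

The `\uFEFF` BOM prepended to the blob is what makes Excel detect UTF-8 and render the Chinese headers correctly.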


@@ -14,6 +14,8 @@ export function SettingsPage() {
    base_token_price_video: 0,
    base_token_price_fast: 0,
    base_token_price_fast_video: 0,
    base_token_price_1080p: 0,
    base_token_price_1080p_video: 0,
    announcement: '',
    announcement_enabled: false,
    max_desktop_sessions: 1,
@@ -143,7 +145,7 @@ export function SettingsPage() {
          />
        </div>
      </div>
      <p className={styles.cardDesc}>Seedance 2.0(480P / 720P)</p>
      <div className={styles.formRow}>
        <div className={styles.formGroup}>
          <label> (/tokens)</label>
@@ -164,7 +166,28 @@ export function SettingsPage() {
          />
        </div>
      </div>
      <p className={styles.cardDesc}>Seedance 2.0(1080P)</p>
      <div className={styles.formRow}>
        <div className={styles.formGroup}>
          <label> (/tokens)</label>
          <input
            type="number"
            step="0.01"
            value={settings.base_token_price_1080p}
            onChange={(e) => setSettings({ ...settings, base_token_price_1080p: Number(e.target.value) })}
          />
        </div>
        <div className={styles.formGroup}>
          <label> (/tokens)</label>
          <input
            type="number"
            step="0.01"
            value={settings.base_token_price_1080p_video}
            onChange={(e) => setSettings({ ...settings, base_token_price_1080p_video: Number(e.target.value) })}
          />
        </div>
      </div>
      <p className={styles.cardDesc}>Seedance 2.0 Fast 1080P</p>
      <div className={styles.formRow}>
        <div className={styles.formGroup}>
          <label> (/tokens)</label>


@@ -31,14 +31,23 @@ function VideoThumbnail({ video, onClick }: { video: AssetVideo; onClick: () =>
  );
}

function isAssetUrl(url: string): boolean {
  return url.startsWith('asset://') || url.startsWith('Asset://');
}

function assetVideoToTask(v: AssetVideo): GenerationTask {
  const references = (v.reference_urls || []).map((ref, i) => {
    const url = ref.url || '';
    const assetRef = isAssetUrl(url);
    return {
      id: `ref_${v.task_id}_${i}`,
      type: (ref.type || 'image') as 'image' | 'video' | 'audio',
      previewUrl: assetRef ? (ref.thumb_url || '') : url,
      label: ref.label || `素材${i + 1}`,
      role: ref.role,
      isAssetRef: assetRef || undefined,
    };
  });
  return {
    id: String(v.id),
    taskId: v.task_id,
@@ -48,6 +57,7 @@ function assetVideoToTask(v: AssetVideo): GenerationTask {
    model: 'seedance_2.0',
    aspectRatio: (v.aspect_ratio as any) || '16:9',
    duration: v.duration as any,
    resolution: v.resolution,
    references,
    assetMentions: [],
    status: 'completed',


@@ -49,7 +49,7 @@ export function TeamRecordsPage() {
      end_date: endDate || undefined,
    });
    const header = '任务ID,提交时间,完成时间,耗时,用户名,模型,视频时长(秒),模式,比例,分辨率,消费秒数,Tokens,费用(元),种子值,状态,提示词,失败原因,原始错误,参考素材数\n';
    const rows = data.results.map((r) => {
      const esc = (s: string) => s.replace(/"/g, '""').replace(/^[=+\-@]/, "'$&");
      const modeLabel = r.mode === 'universal' ? '全能参考' : '首尾帧';
@@ -58,7 +58,8 @@ export function TeamRecordsPage() {
      const elapsed = r.completed_at ? Math.round((new Date(r.completed_at).getTime() - new Date(r.created_at).getTime()) / 1000) + '秒' : '';
      const completedAt = r.completed_at ? new Date(r.completed_at).toLocaleString('zh-CN') : '';
      const refCount = (r.reference_urls || []).length;
      const resolutionLabel = r.resolution ? r.resolution.toUpperCase() : '';
      return `"${r.ark_task_id || ''}","${new Date(r.created_at).toLocaleString('zh-CN')}","${completedAt}","${elapsed}","${r.username}","${modelLabel}","${r.duration ?? ''}","${modeLabel}","${r.aspect_ratio || ''}","${resolutionLabel}","${r.seconds_consumed}","${r.tokens_consumed || 0}","${(r.cost_amount || 0).toFixed(2)}","${r.seed != null && r.seed !== -1 ? r.seed : ''}","${statusLabel}","${esc(r.prompt || '')}","${esc(r.error_message || '')}","${esc(r.raw_error || '')}","${refCount}"`;
    }).join('\n');
    const blob = new Blob(['\uFEFF' + header + rows], { type: 'text/csv;charset=utf-8;' });


@@ -32,7 +32,7 @@ function mapErrorMessage(raw?: string): string | undefined {
  // Model / generation errors
  if (s.includes('quota') || s.includes('insufficient'))
    return '今日生成次数或团队余额不足,请联系管理员';

  // If already Chinese, return as-is
  if (/[\u4e00-\u9fa5]/.test(raw)) return raw;
@@ -59,7 +59,7 @@ function isAssetUrl(url: string): boolean {
  return url.startsWith('asset://') || url.startsWith('Asset://');
}

/** Build ReferenceSnapshot[] from raw reference_urls (including asset refs that have a thumb_url). */
function buildReferenceSnapshots(
  refs: Array<Record<string, string>>,
  taskId: string,
@@ -67,15 +67,23 @@ function buildReferenceSnapshots(
  return refs
    .filter((ref) => {
      const url = ref.url || '';
      // Asset-library references need a thumb_url to show a thumbnail
      if (isAssetUrl(url)) return !!(ref.thumb_url);
      return url.trim() !== '';
    })
    .map((ref, i) => {
      const url = ref.url || '';
      const assetRef = isAssetUrl(url);
      return {
        id: `ref_${taskId}_${i}`,
        type: (ref.type || 'image') as 'image' | 'video' | 'audio',
        // Asset-library refs use thumb_url; direct uploads use the original url
        previewUrl: assetRef ? ref.thumb_url : url,
        label: ref.label || `素材${i + 1}`,
        role: ref.role,
        isAssetRef: assetRef || undefined,
      };
    });
}

/** Extract asset mention metadata from raw reference_urls. */
@ -113,6 +121,7 @@ function backendToFrontend(bt: BackendTask): GenerationTask {
model: bt.model, model: bt.model,
aspectRatio: bt.aspect_ratio as GenerationTask['aspectRatio'], aspectRatio: bt.aspect_ratio as GenerationTask['aspectRatio'],
duration: bt.duration as GenerationTask['duration'], duration: bt.duration as GenerationTask['duration'],
resolution: bt.resolution,
references, references,
assetMentions, assetMentions,
status: mapStatus(bt.status), status: mapStatus(bt.status),
@ -394,6 +403,7 @@ export const useGenerationStore = create<GenerationState>((set, get) => ({
model: input.model, model: input.model,
aspectRatio: input.aspectRatio, aspectRatio: input.aspectRatio,
duration: input.duration, duration: input.duration,
resolution: input.resolution,
references: localRefs, references: localRefs,
assetMentions: placeholderAssetMentions, assetMentions: placeholderAssetMentions,
status: 'generating', status: 'generating',
@ -513,6 +523,7 @@ export const useGenerationStore = create<GenerationState>((set, get) => ({
model: input.model, model: input.model,
aspect_ratio: input.aspectRatio, aspect_ratio: input.aspectRatio,
duration: input.duration, duration: input.duration,
resolution: input.resolution,
references: uploadedRefs, references: uploadedRefs,
search_mode: input.searchMode || 'off', search_mode: input.searchMode || 'off',
seed: input.seed ?? -1, seed: input.seed ?? -1,
@ -610,8 +621,10 @@ export const useGenerationStore = create<GenerationState>((set, get) => ({
} }
if (task.mode === 'universal') { if (task.mode === 'universal') {
// task.references only contains file refs (assets filtered in backendToFrontend) // Only include direct file refs — asset library refs are tracked via assetMentions
const references: UploadedFile[] = task.references.map((r) => ({ const references: UploadedFile[] = task.references
.filter((r) => !r.isAssetRef)
.map((r) => ({
id: r.id, id: r.id,
type: r.type, type: r.type,
previewUrl: r.previewUrl, previewUrl: r.previewUrl,
@ -628,6 +641,7 @@ export const useGenerationStore = create<GenerationState>((set, get) => ({
editorHtml: task.prompt, editorHtml: task.prompt,
aspectRatio: task.aspectRatio, aspectRatio: task.aspectRatio,
duration: task.duration, duration: task.duration,
resolution: task.resolution,
references, references,
assetMentions: task.assetMentions || [], assetMentions: task.assetMentions || [],
// 如果 seed 开关打开且 task 有有效 seed填入否则不动 // 如果 seed 开关打开且 task 有有效 seed填入否则不动
@@ -642,6 +656,7 @@ export const useGenerationStore = create<GenerationState>((set, get) => ({
       editorHtml: task.editorHtml || task.prompt,
       aspectRatio: task.aspectRatio,
       duration: task.duration,
+      resolution: task.resolution,
       assetMentions: [],
       firstFrame: firstRef ? { id: firstRef.id, type: firstRef.type, previewUrl: firstRef.previewUrl, label: '首帧', tosUrl: firstRef.previewUrl } : null,
       lastFrame: lastRef ? { id: lastRef.id, type: lastRef.type, previewUrl: lastRef.previewUrl, label: '尾帧', tosUrl: lastRef.previewUrl } : null,
@@ -661,8 +676,10 @@ export const useGenerationStore = create<GenerationState>((set, get) => ({
     }
     // For regeneration, we need to re-submit with the same TOS URLs
-    // Set up the input bar state, then call addTask
-    const references: UploadedFile[] = task.references.map((r) => ({
+    // Only include direct file refs — asset library refs go via assetMentions fallback
+    const references: UploadedFile[] = task.references
+      .filter((r) => !r.isAssetRef)
+      .map((r) => ({
       id: r.id,
       type: r.type,
       previewUrl: r.previewUrl,
@@ -676,6 +693,7 @@ export const useGenerationStore = create<GenerationState>((set, get) => ({
       model: task.model,
       aspectRatio: task.aspectRatio,
       duration: task.duration,
+      resolution: task.resolution,
       references: task.mode === 'universal' ? references : [],
       assetMentions: task.assetMentions || [],
     });


@@ -1,5 +1,5 @@
 import { create } from 'zustand';
-import type { CreationMode, ModelOption, AspectRatio, Duration, GenerationType, UploadedFile } from '../types';
+import type { CreationMode, ModelOption, AspectRatio, Duration, Resolution, GenerationType, UploadedFile } from '../types';
 import { showToast } from '../components/Toast';
 import { mediaApi } from '../lib/api';
 import { parseAssetMentions } from '../lib/assetMentions';
@@ -88,6 +88,10 @@ interface InputBarState {
   setDuration: (duration: Duration) => void;
   prevDuration: Duration;
+  // Resolution (480p/720p/1080p) — 1080p is only supported by Seedance 2.0
+  resolution: Resolution;
+  setResolution: (resolution: Resolution) => void;
   // Prompt
   prompt: string;
   setPrompt: (prompt: string) => void;
@@ -145,7 +149,17 @@ export const useInputBarStore = create<InputBarState>((set, get) => ({
   setMode: (mode) => set({ mode }),
   model: 'seedance_2.0',
-  setModel: (model) => set({ model }),
+  setModel: (model) => {
+    // Fast + 1080P is an invalid combination (official docs constraint). The UI dropdown
+    // already greys out the Fast option; this is a defensive guard (defense in depth) for
+    // when the UI is bypassed. No silent downgrade: block the switch and show a toast
+    // guiding the user to change the resolution manually, so their choice is always respected.
+    const state = get();
+    if (model === 'seedance_2.0_fast' && state.resolution === '1080p') {
+      showToast('1080P 仅 AirDrama 模型支持,请先切换分辨率到 720P 或 480P');
+      return;
+    }
+    set({ model });
+  },
   aspectRatio: '21:9',
   setAspectRatio: (aspectRatio) => set({ aspectRatio, prevAspectRatio: aspectRatio }),
@ -162,6 +176,17 @@ export const useInputBarStore = create<InputBarState>((set, get) => ({
}, },
prevDuration: 15, prevDuration: 15,
resolution: '720p' as Resolution,
setResolution: (resolution) => {
// Fast + 1080P 非法组合(对称 setModel 的拦截)— UI Dropdown 已置灰,此处防御性拦截
const state = get();
if (resolution === '1080p' && state.model === 'seedance_2.0_fast') {
showToast('AirDrama Fast 不支持 1080P请先切换模型到 AirDrama');
return;
}
set({ resolution });
},
prompt: '', prompt: '',
setPrompt: (prompt) => set({ prompt }), setPrompt: (prompt) => set({ prompt }),
@@ -218,9 +243,43 @@ export const useInputBarStore = create<InputBarState>((set, get) => ({
   },
   removeReference: (id) => {
     const state = get();
-    const ref = state.references.find((r) => r.id === id);
-    if (ref) URL.revokeObjectURL(ref.previewUrl);
-    set({ references: state.references.filter((r) => r.id !== id) });
+    const removedRef = state.references.find((r) => r.id === id);
+    if (!removedRef) return;
+    if (removedRef.previewUrl) URL.revokeObjectURL(removedRef.previewUrl);
+    // Renumber the remaining references of the same type in order
+    // (Jimeng-style: keep the 1/2/3 sequence contiguous)
+    const remaining = state.references.filter((r) => r.id !== id);
+    const labelPrefix = removedRef.type === 'image' ? '图片' : removedRef.type === 'video' ? '视频' : '音频';
+    const labelUpdates = new Map<string, string>(); // refId -> newLabel
+    let idx = 1;
+    const relabeled = remaining.map((r) => {
+      if (r.type !== removedRef.type) return r;
+      const newLabel = `${labelPrefix}${idx++}`;
+      if (r.label !== newLabel) labelUpdates.set(r.id, newLabel);
+      return r.label === newLabel ? r : { ...r, label: newLabel };
+    });
+    // Sync the text of the corresponding @mention spans in editorHtml for each relabeled refId
+    let newEditorHtml = state.editorHtml;
+    if (labelUpdates.size > 0 && newEditorHtml) {
+      const doc = new DOMParser().parseFromString(`<div>${newEditorHtml}</div>`, 'text/html');
+      const container = doc.body.firstChild as HTMLElement | null;
+      if (container) {
+        container.querySelectorAll('[data-ref-id]').forEach((span) => {
+          const el = span as HTMLElement;
+          const refId = el.dataset.refId;
+          if (refId && labelUpdates.has(refId)) {
+            const newLabel = labelUpdates.get(refId)!;
+            // Span structure: [icon/img] + hidden "@" span + text node (label)
+            const labelNode = [...el.childNodes].reverse().find((n) => n.nodeType === Node.TEXT_NODE);
+            if (labelNode) labelNode.textContent = newLabel;
+          }
+        });
+        newEditorHtml = container.innerHTML;
+      }
+    }
+    set({ references: relabeled, editorHtml: newEditorHtml });
   },
   clearReferences: () => {
     const state = get();
@@ -285,10 +344,19 @@ export const useInputBarStore = create<InputBarState>((set, get) => ({
       ? state.references.length > 0
       : state.firstFrame !== null || state.lastFrame !== null;
     if (!hasText && !hasFiles) return false;
-    // Audio cannot be sent alone — must have image or video
-    if (state.mode === 'universal' && state.references.length > 0) {
-      const hasImageOrVideo = state.references.some((r) => r.type === 'image' || r.type === 'video');
-      if (!hasImageOrVideo && !hasText) return false;
+    // Audio cannot be the only reference — Seedance API requires image or video alongside
+    if (state.mode === 'universal') {
+      const hasAudioRef = state.references.some((r) => r.type === 'audio');
+      const hasAudioAsset = (state.assetMentions || []).some((m: Record<string, string>) =>
+        (m.assetType || '').toLowerCase() === 'audio');
+      if (hasAudioRef || hasAudioAsset) {
+        const hasImageOrVideoRef = state.references.some((r) => r.type === 'image' || r.type === 'video');
+        const hasImageOrVideoAsset = (state.assetMentions || []).some((m: Record<string, string>) => {
+          const t = (m.assetType || '').toLowerCase();
+          return t === 'image' || t === 'video';
+        });
+        if (!hasImageOrVideoRef && !hasImageOrVideoAsset) return false;
+      }
     }
     // Block submit if any reference is still uploading or failed
     if (state.references.some((r) => r.uploading || r.uploadError)) return false;
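The audio-only rule in this hunk can be factored as a pure predicate over reference and mention media types. A minimal sketch; `RefLike`/`MentionLike` are simplified stand-ins for the store's types, not the real interfaces:

```typescript
type MediaType = 'image' | 'video' | 'audio';

interface RefLike { type: MediaType }
interface MentionLike { assetType?: string }

// Returns false only when audio is present with no image/video alongside,
// mirroring the canSubmit rule that audio cannot be the sole reference.
function audioGuardOk(refs: RefLike[], mentions: MentionLike[]): boolean {
  const types: string[] = [
    ...refs.map((r) => r.type as string),
    ...mentions.map((m) => (m.assetType || '').toLowerCase()),
  ];
  if (!types.includes('audio')) return true;
  return types.includes('image') || types.includes('video');
}
```

Note that asset mentions count on both sides of the rule: an audio asset triggers the check, and an image/video asset satisfies it, which is exactly what the old `references`-only version missed.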
@@ -355,6 +423,7 @@ export const useInputBarStore = create<InputBarState>((set, get) => ({
   prevAspectRatio: '21:9',
   duration: 15,
   prevDuration: 15,
+  resolution: '720p',
   prompt: '',
   editorHtml: '',
   references: [],
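The contiguous-renumbering rule that `removeReference` implements can be sketched as a pure function. A minimal sketch: `Ref` is a simplified stand-in for the store's reference type, and the editorHtml/DOM sync step is omitted:

```typescript
type MediaType = 'image' | 'video' | 'audio';
interface Ref { id: string; type: MediaType; label: string }

const PREFIX: Record<MediaType, string> = { image: '图片', video: '视频', audio: '音频' };

// Remove the ref with the given id, then renumber the remaining refs of the
// same type so labels stay contiguous (图片1/图片2/... with no gaps).
// Refs of other types are returned untouched.
function removeAndRelabel(refs: Ref[], id: string): Ref[] {
  const removed = refs.find((r) => r.id === id);
  if (!removed) return refs;
  const remaining = refs.filter((r) => r.id !== id);
  let idx = 1;
  return remaining.map((r) => {
    if (r.type !== removed.type) return r;
    const label = `${PREFIX[r.type]}${idx++}`;
    return r.label === label ? r : { ...r, label };
  });
}
```

The store version additionally records which labels changed in a `Map` so it can patch only the affected `@mention` spans in the editor HTML.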


@@ -2,6 +2,7 @@ export type CreationMode = 'universal' | 'keyframe';
 export type ModelOption = 'seedance_2.0' | 'seedance_2.0_fast';
 export type AspectRatio = '16:9' | '9:16' | '1:1' | '21:9' | '4:3' | '3:4';
 export type Duration = 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15;
+export type Resolution = '480p' | '720p' | '1080p';
 export type GenerationType = 'video' | 'image';
 export type UserRole = 'super_admin' | 'team_admin' | 'member';
@@ -32,6 +33,7 @@ export interface ReferenceSnapshot {
   previewUrl: string;
   label: string;
   role?: string;
+  isAssetRef?: boolean;
 }
 export interface GenerationTask {
@@ -43,6 +45,7 @@ export interface GenerationTask {
   model: ModelOption;
   aspectRatio: AspectRatio;
   duration: Duration;
+  resolution: Resolution;
   references: ReferenceSnapshot[];
   // eslint-disable-next-line @typescript-eslint/no-explicit-any
   assetMentions: Record<string, any>[];
@@ -66,6 +69,7 @@ export interface BackendTask {
   mode: CreationMode;
   model: ModelOption;
   aspect_ratio: string;
+  resolution: Resolution;
   duration: number;
   seconds_consumed: number;
   tokens_consumed: number;
@@ -75,7 +79,7 @@ export interface BackendTask {
   result_url: string;
   thumbnail_url: string;
   error_message: string;
-  reference_urls: { url: string; type: string; role: string; label: string }[];
+  reference_urls: { url: string; type: string; role: string; label: string; thumb_url?: string }[];
   is_favorited: boolean;
   seed: number;
   created_at: string;
@@ -112,6 +116,8 @@ export interface TeamInfo {
   token_price_video: number;
   token_price_fast: number;
   token_price_fast_video: number;
+  token_price_1080p: number;
+  token_price_1080p_video: number;
   is_active: boolean;
 }
@@ -203,6 +209,7 @@ export interface AdminRecord {
   mode: CreationMode;
   model: ModelOption;
   aspect_ratio?: string;
+  resolution?: Resolution;
   status: 'queued' | 'processing' | 'completed' | 'failed';
   error_message?: string;
   raw_error?: string;
@@ -221,6 +228,8 @@ export interface SystemSettings {
   base_token_price_video: number;
   base_token_price_fast: number;
   base_token_price_fast_video: number;
+  base_token_price_1080p: number;
+  base_token_price_1080p_video: number;
   announcement: string;
   announcement_enabled: boolean;
   max_desktop_sessions: number;
@@ -406,7 +415,8 @@ export interface AssetVideo {
   seconds_consumed: number;
   cost_amount?: number;
   aspect_ratio: string;
-  reference_urls?: { url: string; type: string; role: string; label: string }[];
+  resolution: Resolution;
+  reference_urls?: { url: string; type: string; role: string; label: string; thumb_url?: string }[];
   created_at: string;
 }


@@ -0,0 +1,133 @@
/**
 * Bug 2 E2E tests — run against local dev: frontend localhost:5173 + backend 127.0.0.1:8000
 */
import { test, expect, Page } from '@playwright/test';
const BASE_URL = 'http://localhost:5173';
const API_URL = 'http://127.0.0.1:8000';
const USERNAME = 'admin';
const PASSWORD = 'admin123';
const TEST_IMAGES_DIR = 'C:/Users/Air-work/AppData/Local/Temp/bug2test';
const IMG_RED = `${TEST_IMAGES_DIR}/test_red.png`;
const IMG_GREEN = `${TEST_IMAGES_DIR}/test_green.png`;
const IMG_BLUE = `${TEST_IMAGES_DIR}/test_blue.png`;
async function login(page: Page) {
const resp = await page.request.post(`${API_URL}/api/v1/auth/login`, {
data: { username: USERNAME, password: PASSWORD },
});
if (!resp.ok()) {
const errText = await resp.text();
console.log('LOGIN FAILED:', resp.status(), errText);
}
expect(resp.ok()).toBeTruthy();
const body = await resp.json();
await page.goto(BASE_URL);
await page.evaluate(({ access, refresh }) => {
localStorage.setItem('access_token', access);
localStorage.setItem('refresh_token', refresh);
}, { access: body.tokens.access, refresh: body.tokens.refresh });
await page.goto(`${BASE_URL}/app`);
await page.waitForTimeout(1500);
// Dismiss the announcement modal that may appear on first login
const knowBtn = page.getByRole('button', { name: /我知道了|知道了|关闭/ }).first();
if (await knowBtn.isVisible().catch(() => false)) {
await knowBtn.click();
await page.waitForTimeout(300);
}
}
test.describe.serial('Bug 2: Jimeng-style contiguous renaming after image deletion', () => {
test('upload 3 images → delete image 2 → image 3 becomes image 2', async ({ page }) => {
await login(page);
// Upload 3 images
const fileInput = page.locator('input[type="file"]').first();
await fileInput.setInputFiles([IMG_RED, IMG_GREEN, IMG_BLUE]);
await page.waitForTimeout(2000); // wait for image validation and ref insertion to finish
// Expand the thumbnail stack (triggered on hover)
const thumbRow = page.locator('[class*="thumbRow"]').first();
await thumbRow.hover();
await page.waitForTimeout(500);
// Verify initial labels after upload: 图片1/图片2/图片3
const labelsInitial = await page.locator('[class*="thumbLabel"]').allTextContents();
console.log('initial labels:', labelsInitial);
expect(labelsInitial).toEqual(['图片1', '图片2', '图片3']);
// Click the delete button on the 2nd image
const secondThumb = page.locator('[class*="thumbItem"]').nth(1);
await secondThumb.hover();
await secondThumb.locator('[class*="thumbClose"]').click({ force: true });
await page.waitForTimeout(500);
// After renaming, labels should be 图片1/图片2 (图片2 is the former 图片3)
await thumbRow.hover();
await page.waitForTimeout(300);
const labelsAfterDelete = await page.locator('[class*="thumbLabel"]').allTextContents();
console.log('after deleting 图片2:', labelsAfterDelete);
expect(labelsAfterDelete).toEqual(['图片1', '图片2']);
expect(labelsAfterDelete.length).toBe(2);
});
test('delete image 2, then upload 1 more → the new image is 图片3, no conflict with existing labels', async ({ page }) => {
await login(page);
// Upload 3 images
const fileInput = page.locator('input[type="file"]').first();
await fileInput.setInputFiles([IMG_RED, IMG_GREEN, IMG_BLUE]);
await page.waitForTimeout(2000);
const thumbRow = page.locator('[class*="thumbRow"]').first();
await thumbRow.hover();
await page.waitForTimeout(500);
// Delete the 2nd image
const secondThumb = page.locator('[class*="thumbItem"]').nth(1);
await secondThumb.hover();
await secondThumb.locator('[class*="thumbClose"]').click({ force: true });
await page.waitForTimeout(500);
// Upload 1 more image
await fileInput.setInputFiles([IMG_RED]);
await page.waitForTimeout(2000);
// Expect: original 图片1, former 图片3 (now renamed 图片2), and the new upload as 图片3
await thumbRow.hover();
await page.waitForTimeout(300);
const finalLabels = await page.locator('[class*="thumbLabel"]').allTextContents();
console.log('after delete then re-upload:', finalLabels);
expect(finalLabels).toEqual(['图片1', '图片2', '图片3']);
// No duplicate numbering
expect(new Set(finalLabels).size).toBe(finalLabels.length);
});
test('delete the 1st image → remaining images all shift forward', async ({ page }) => {
await login(page);
const fileInput = page.locator('input[type="file"]').first();
await fileInput.setInputFiles([IMG_RED, IMG_GREEN, IMG_BLUE]);
await page.waitForTimeout(2000);
const thumbRow = page.locator('[class*="thumbRow"]').first();
await thumbRow.hover();
await page.waitForTimeout(500);
// Delete the 1st image
const firstThumb = page.locator('[class*="thumbItem"]').nth(0);
await firstThumb.hover();
await firstThumb.locator('[class*="thumbClose"]').click({ force: true });
await page.waitForTimeout(500);
await thumbRow.hover();
await page.waitForTimeout(300);
const labels = await page.locator('[class*="thumbLabel"]').allTextContents();
console.log('after deleting 图片1:', labels);
expect(labels).toEqual(['图片1', '图片2']);
});
});


@@ -0,0 +1,154 @@
/**
 * 1080P E2E tests against the staging server (airflow-studio.test.airlabs.art), using the tudou account.
 * Companion to resolution-1080p.spec.ts, which runs in the CI/CD pipeline.
 */
import { test, expect, Page } from '@playwright/test';
const BASE_URL = 'https://airflow-studio.test.airlabs.art';
const API_URL = 'https://airflow-studio-api.test.airlabs.art';
const USERNAME = 'tudou';
const PASSWORD = 'seaislee';
async function login(page: Page) {
const resp = await page.request.post(`${API_URL}/api/v1/auth/login`, {
data: { username: USERNAME, password: PASSWORD },
});
if (!resp.ok()) {
const err = await resp.text();
throw new Error(`Login failed: ${resp.status()} ${err}`);
}
const body = await resp.json();
await page.goto(BASE_URL);
await page.evaluate(({ access, refresh }) => {
localStorage.setItem('access_token', access);
localStorage.setItem('refresh_token', refresh);
}, { access: body.tokens.access, refresh: body.tokens.refresh });
await page.goto(`${BASE_URL}/app`);
await page.waitForTimeout(2000);
// Dismiss the announcement modal
const knowBtn = page.getByRole('button', { name: /我知道了|知道了|关闭/ }).first();
if (await knowBtn.isVisible().catch(() => false)) {
await knowBtn.click();
await page.waitForTimeout(300);
}
}
test.describe.serial('[staging] 1080P resolution support — tudou team-admin account', () => {
test('Sidebar shows 「今日剩余次数」 (no diamond icon)', async ({ page }) => {
await login(page);
// Should contain the "今日剩余次数" copy
await expect(page.getByText('今日剩余次数')).toBeVisible();
// Confirm the diamond SVG is gone (the old diamond path)
const diamondPath = page.locator('path[d^="M6 3h12l4 8"]');
expect(await diamondPath.count()).toBe(0);
});
test('Toolbar shows 720P as the default resolution', async ({ page }) => {
await login(page);
const resolutionBtn = page.getByRole('button', { name: '720P', exact: true }).first();
await expect(resolutionBtn).toBeVisible();
});
test('AirDrama model can switch to 1080P', async ({ page }) => {
await login(page);
await page.getByRole('button', { name: '720P', exact: true }).first().click();
await page.waitForTimeout(200);
await page.getByText('1080P', { exact: true }).click();
await page.waitForTimeout(300);
await expect(page.getByRole('button', { name: '1080P', exact: true }).first()).toBeVisible();
});
test('with 1080P selected, the Fast model is greyed out in the dropdown', async ({ page }) => {
await login(page);
// Switch to 1080P first
await page.getByRole('button', { name: '720P', exact: true }).first().click();
await page.waitForTimeout(200);
await page.getByText('1080P', { exact: true }).click();
await page.waitForTimeout(300);
// Open the model dropdown
await page.getByRole('button', { name: /AirDrama$/, exact: false }).first().click();
await page.waitForTimeout(200);
// The Fast item should carry the "不支持 1080P" note
await expect(page.getByText(/AirDrama Fast.*不支持 1080P/)).toBeVisible();
});
test('with the Fast model selected, 1080P is greyed out in the dropdown', async ({ page }) => {
await login(page);
// Switch to the Fast model
await page.getByRole('button', { name: /AirDrama$/, exact: false }).first().click();
await page.waitForTimeout(200);
await page.getByText('AirDrama Fast', { exact: true }).click();
await page.waitForTimeout(300);
// Open the resolution dropdown
await page.getByRole('button', { name: '720P', exact: true }).first().click();
await page.waitForTimeout(200);
// The 1080P item should carry the "Fast 不支持" note
await expect(page.getByText(/1080P.*Fast 不支持/)).toBeVisible();
});
test('ProfilePage warning copy shows 「今日生成次数」 rather than 「额度」', async ({ page }) => {
await login(page);
await page.goto(`${BASE_URL}/profile`);
await page.waitForTimeout(1500);
// The page must not contain the old "今日额度" copy
const body = await page.textContent('body');
// Any "今日"-related copy must pair with "次数", never with "额度"
if (body && body.includes('今日')) {
expect(body).not.toMatch(/今日额度/);
}
});
test('submitting the Fast+1080P combination is rejected by the backend with 400', async ({ page }) => {
await login(page);
// Call the API directly (bypassing the frontend UI constraints) to verify the backend fails loud
const loginResp = await page.request.post(`${API_URL}/api/v1/auth/login`, {
data: { username: USERNAME, password: PASSWORD },
});
const { tokens } = await loginResp.json();
const resp = await page.request.post(`${API_URL}/api/v1/video/generate`, {
headers: { Authorization: `Bearer ${tokens.access}` },
data: {
prompt: 'E2E 测试 Fast+1080P',
mode: 'universal',
model: 'seedance_2.0_fast',
aspect_ratio: '16:9',
duration: 5,
resolution: '1080p',
references: [],
},
});
expect(resp.status()).toBe(400);
const body = await resp.json();
expect(body.error).toBe('invalid_resolution');
expect(body.message).toContain('1080P');
expect(body.message).toContain('Fast');
});
test('submitting an adaptive aspect ratio is rejected by the backend with 400', async ({ page }) => {
await login(page);
const loginResp = await page.request.post(`${API_URL}/api/v1/auth/login`, {
data: { username: USERNAME, password: PASSWORD },
});
const { tokens } = await loginResp.json();
const resp = await page.request.post(`${API_URL}/api/v1/video/generate`, {
headers: { Authorization: `Bearer ${tokens.access}` },
data: {
prompt: 'E2E adaptive',
mode: 'universal',
model: 'seedance_2.0',
aspect_ratio: 'adaptive',
duration: 5,
resolution: '720p',
references: [],
},
});
expect(resp.status()).toBe(400);
});
});
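The backend rule that the two API-level tests above exercise can be sketched as a pure validator. A hedged sketch: the real backend is a separate service; the field names and the `invalid_resolution` error code come from the test payloads and assertions, while the error code for the adaptive case is a hypothetical placeholder (the test only asserts status 400):

```typescript
interface GenerateRequest {
  model: string;
  aspect_ratio: string;
  resolution: string;
}

interface ValidationError { error: string; message: string }

// Mirrors the two 400 cases asserted above: Fast+1080P and an adaptive ratio.
// Returns null when the request passes validation.
function validateGenerate(req: GenerateRequest): ValidationError | null {
  if (req.model === 'seedance_2.0_fast' && req.resolution === '1080p') {
    // The tests assert error === 'invalid_resolution' and a message naming 1080P and Fast
    return { error: 'invalid_resolution', message: '1080P is not supported by the Fast model' };
  }
  if (req.aspect_ratio === 'adaptive') {
    // Error code here is a hypothetical placeholder; the test only checks for 400
    return { error: 'invalid_aspect_ratio', message: 'adaptive aspect ratio is not accepted' };
  }
  return null;
}
```

Keeping this check on the backend as well as in the store means the invalid combination fails loudly even when a client bypasses the UI guards.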


@@ -0,0 +1,136 @@
/**
 * 1080P E2E UI tests — run against local dev: frontend localhost:5173 + backend 127.0.0.1:8000
 */
import { test, expect, Page } from '@playwright/test';
const BASE_URL = 'http://localhost:5173';
const API_URL = 'http://127.0.0.1:8000';
const USERNAME = 'admin';
const PASSWORD = 'admin123';
async function login(page: Page) {
const resp = await page.request.post(`${API_URL}/api/v1/auth/login`, {
data: { username: USERNAME, password: PASSWORD },
});
if (!resp.ok()) {
const err = await resp.text();
throw new Error(`Login failed: ${resp.status()} ${err}`);
}
const body = await resp.json();
await page.goto(BASE_URL);
await page.evaluate(({ access, refresh }) => {
localStorage.setItem('access_token', access);
localStorage.setItem('refresh_token', refresh);
}, { access: body.tokens.access, refresh: body.tokens.refresh });
await page.goto(`${BASE_URL}/app`);
await page.waitForTimeout(1500);
// Dismiss the announcement modal
const knowBtn = page.getByRole('button', { name: /我知道了|知道了|关闭/ }).first();
if (await knowBtn.isVisible().catch(() => false)) {
await knowBtn.click();
await page.waitForTimeout(300);
}
}
test.describe.serial('1080P resolution support', () => {
test('default resolution shows 720P', async ({ page }) => {
await login(page);
// Find the resolution button in the toolbar; its label should read 720P
const resolutionBtn = page.getByRole('button', { name: '720P', exact: true }).first();
await expect(resolutionBtn).toBeVisible();
});
test('AirDrama model can switch to 1080P', async ({ page }) => {
await login(page);
// Click the resolution button to open the dropdown
await page.getByRole('button', { name: '720P', exact: true }).first().click();
await page.waitForTimeout(200);
// Pick 1080P
await page.getByText('1080P', { exact: true }).click();
await page.waitForTimeout(300);
// The resolution button should now read 1080P
await expect(page.getByRole('button', { name: '1080P', exact: true }).first()).toBeVisible();
});
test('with 1080P selected, the Fast model is greyed out (Fast+1080P unreachable via UI)', async ({ page }) => {
await login(page);
// Switch to 1080P first
await page.getByRole('button', { name: '720P', exact: true }).first().click();
await page.waitForTimeout(200);
await page.getByText('1080P', { exact: true }).click();
await page.waitForTimeout(300);
// Open the model dropdown
await page.getByRole('button', { name: /AirDrama$/, exact: false }).first().click();
await page.waitForTimeout(200);
// The Fast item should include "不支持 1080P" and look disabled
const fastItem = page.getByText(/AirDrama Fast.*不支持 1080P/);
await expect(fastItem).toBeVisible();
// Clicking Fast must not switch the model; the dropdown's disabled state blocks onSelect
await fastItem.click({ force: true });
await page.waitForTimeout(300);
// The model should still be AirDrama
await expect(page.getByRole('button', { name: /AirDrama$/, exact: false }).first()).toBeVisible();
});
test('with the Fast model selected, 1080P is greyed out (Fast+1080P unreachable via UI, reverse direction)', async ({ page }) => {
await login(page);
// Make sure resolution is back at 720P (reset via reload)
await page.reload();
await page.waitForTimeout(1500);
// Switch to the Fast model
await page.getByRole('button', { name: /AirDrama$/, exact: false }).first().click();
await page.waitForTimeout(200);
await page.getByText('AirDrama Fast', { exact: true }).click();
await page.waitForTimeout(300);
// Open the resolution dropdown
await page.getByRole('button', { name: '720P', exact: true }).first().click();
await page.waitForTimeout(200);
// The 1080P item should carry the "Fast 不支持" note
const disabled1080p = page.getByText(/1080P.*Fast 不支持/);
await expect(disabled1080p).toBeVisible();
// Clicking it has no effect
await disabled1080p.click({ force: true });
await page.waitForTimeout(300);
// Resolution stays 720P (the dropdown may stay open or close, but the button must not change)
const bodyText = await page.textContent('body');
expect(bodyText).toContain('720P');
});
test('estimated-cost tooltip explicitly defers to Volcengine ("以火山为准")', async ({ page }) => {
await login(page);
// The "预估" readout in the button bar only shows when there is a prompt or an asset,
// so type a simple prompt first
const promptArea = page.locator('[contenteditable]').first();
if (await promptArea.isVisible().catch(() => false)) {
await promptArea.click();
await promptArea.type('测试提示词');
await page.waitForTimeout(300);
}
// Find the "预估消耗" copy
const estSpan = page.getByText(/预估消耗/).first();
if (await estSpan.isVisible().catch(() => false)) {
const title = await estSpan.getAttribute('title');
expect(title).toBeTruthy();
expect(title!).toContain('实际');
expect(title!).toContain('火山');
} else {
// Skip if the estimate is not shown (e.g. the team has no unit prices configured)
console.log('skipped: estimate not shown (team may not have unit prices configured)');
}
});
});


@@ -0,0 +1,234 @@
/**
 * Bug 2 fix verification: after a reference is removed, the remaining same-type
 * references are renumbered and the matching @mention spans in editorHtml stay in sync.
 */
import { describe, it, expect, beforeEach } from 'vitest';
import { useInputBarStore } from '../../src/store/inputBar';
function mockFile(name: string, type = 'image/jpeg'): File {
return new File(['mock'], name, { type });
}
function mockRef(id: string, type: 'image' | 'video' | 'audio', label: string) {
return {
id,
file: mockFile(`${id}.${type === 'image' ? 'jpg' : type === 'video' ? 'mp4' : 'mp3'}`),
type,
previewUrl: `blob:${id}`,
label,
};
}
function mentionSpan(refId: string, refType: string, label: string): string {
return `<span data-ref-id="${refId}" data-ref-type="${refType}" class="mention" contenteditable="false"><span style="font-size:0;width:0;overflow:hidden;display:inline">@</span>${label}</span>`;
}
describe('removeReference — Jimeng-style contiguous renaming', () => {
beforeEach(() => {
useInputBarStore.getState().reset();
});
describe('image renaming', () => {
it('after deleting 图片2, 图片3 is renamed to 图片2 (references + editorHtml in sync)', () => {
const refs = [
mockRef('ref_1', 'image', '图片1'),
mockRef('ref_2', 'image', '图片2'),
mockRef('ref_3', 'image', '图片3'),
];
const editorHtml =
`开场 ${mentionSpan('ref_1', 'image', '图片1')}${mentionSpan('ref_2', 'image', '图片2')} 在和 ${mentionSpan('ref_3', 'image', '图片3')} 讲话`;
useInputBarStore.setState({ references: refs, editorHtml });
useInputBarStore.getState().removeReference('ref_2');
const state = useInputBarStore.getState();
expect(state.references).toHaveLength(2);
expect(state.references[0].id).toBe('ref_1');
expect(state.references[0].label).toBe('图片1');
expect(state.references[1].id).toBe('ref_3');
expect(state.references[1].label).toBe('图片2'); // former 图片3 → 图片2
// In editorHtml, ref_3's text node should now read "图片2"
expect(state.editorHtml).toContain('data-ref-id="ref_3"');
expect(state.editorHtml).toMatch(/data-ref-id="ref_3"[^>]*>[\s\S]*?图片2<\/span>/);
// ref_1 keeps "图片1"
expect(state.editorHtml).toMatch(/data-ref-id="ref_1"[^>]*>[\s\S]*?图片1<\/span>/);
});
it('after deleting 图片1, 图片2 and 图片3 are renamed to 图片1 and 图片2', () => {
const refs = [
mockRef('ref_1', 'image', '图片1'),
mockRef('ref_2', 'image', '图片2'),
mockRef('ref_3', 'image', '图片3'),
];
const editorHtml = `${mentionSpan('ref_2', 'image', '图片2')} ${mentionSpan('ref_3', 'image', '图片3')}`;
useInputBarStore.setState({ references: refs, editorHtml });
useInputBarStore.getState().removeReference('ref_1');
const state = useInputBarStore.getState();
expect(state.references[0].label).toBe('图片1'); // ref_2
expect(state.references[1].label).toBe('图片2'); // ref_3
expect(state.editorHtml).toMatch(/data-ref-id="ref_2"[^>]*>[\s\S]*?图片1<\/span>/);
expect(state.editorHtml).toMatch(/data-ref-id="ref_3"[^>]*>[\s\S]*?图片2<\/span>/);
});
it('deleting the last (only) image clears references and leaves editorHtml unchanged', () => {
const refs = [mockRef('ref_1', 'image', '图片1')];
const editorHtml = `内容 ${mentionSpan('ref_1', 'image', '图片1')} 尾部`;
useInputBarStore.setState({ references: refs, editorHtml });
useInputBarStore.getState().removeReference('ref_1');
const state = useInputBarStore.getState();
expect(state.references).toHaveLength(0);
// remaining is empty → labelUpdates is empty → DOM pass skipped → editorHtml kept as-is
expect(state.editorHtml).toBe(editorHtml);
});
});
describe('videos/audio numbered independently', () => {
it('with images and videos mixed, deleting an image renames only images; videos untouched', () => {
const refs = [
mockRef('ref_1', 'image', '图片1'),
mockRef('ref_2', 'image', '图片2'),
mockRef('ref_3', 'video', '视频1'),
];
const editorHtml =
`${mentionSpan('ref_1', 'image', '图片1')} ${mentionSpan('ref_2', 'image', '图片2')} ${mentionSpan('ref_3', 'video', '视频1')}`;
useInputBarStore.setState({ references: refs, editorHtml });
useInputBarStore.getState().removeReference('ref_1');
const state = useInputBarStore.getState();
expect(state.references).toHaveLength(2);
expect(state.references[0].label).toBe('图片1'); // former 图片2
expect(state.references[1].label).toBe('视频1'); // video unchanged
expect(state.editorHtml).toMatch(/data-ref-id="ref_2"[^>]*>[\s\S]*?图片1<\/span>/);
expect(state.editorHtml).toMatch(/data-ref-id="ref_3"[^>]*>[\s\S]*?视频1<\/span>/);
});
it('after deleting 视频2, 视频3 is renamed to 视频2', () => {
const refs = [
mockRef('ref_1', 'video', '视频1'),
mockRef('ref_2', 'video', '视频2'),
mockRef('ref_3', 'video', '视频3'),
];
const editorHtml = `${mentionSpan('ref_3', 'video', '视频3')}`;
useInputBarStore.setState({ references: refs, editorHtml });
useInputBarStore.getState().removeReference('ref_2');
const state = useInputBarStore.getState();
expect(state.references[0].label).toBe('视频1');
expect(state.references[1].label).toBe('视频2'); // former 视频3
expect(state.editorHtml).toMatch(/data-ref-id="ref_3"[^>]*>[\s\S]*?视频2<\/span>/);
});
it('after deleting 音频1, 音频2 is renamed to 音频1', () => {
const refs = [
mockRef('ref_1', 'audio', '音频1'),
mockRef('ref_2', 'audio', '音频2'),
];
const editorHtml = `${mentionSpan('ref_1', 'audio', '音频1')} ${mentionSpan('ref_2', 'audio', '音频2')}`;
useInputBarStore.setState({ references: refs, editorHtml });
useInputBarStore.getState().removeReference('ref_1');
const state = useInputBarStore.getState();
expect(state.references).toHaveLength(1);
expect(state.references[0].id).toBe('ref_2');
expect(state.references[0].label).toBe('音频1'); // former 音频2
expect(state.editorHtml).toMatch(/data-ref-id="ref_2"[^>]*>[\s\S]*?音频1<\/span>/);
});
});
describe('edge cases', () => {
it('empty editorHtml — no error, only references renamed', () => {
const refs = [
mockRef('ref_1', 'image', '图片1'),
mockRef('ref_2', 'image', '图片2'),
];
useInputBarStore.setState({ references: refs, editorHtml: '' });
useInputBarStore.getState().removeReference('ref_1');
const state = useInputBarStore.getState();
expect(state.references).toHaveLength(1);
expect(state.references[0].label).toBe('图片1');
expect(state.editorHtml).toBe('');
});
it('no matching @mention span in editorHtml — only references change', () => {
const refs = [
mockRef('ref_1', 'image', '图片1'),
mockRef('ref_2', 'image', '图片2'),
];
const editorHtml = '<span>纯文本,没有 mention span</span>';
useInputBarStore.setState({ references: refs, editorHtml });
useInputBarStore.getState().removeReference('ref_1');
const state = useInputBarStore.getState();
expect(state.references).toHaveLength(1);
expect(state.references[0].label).toBe('图片1'); // ref_2 renamed
// editorHtml has no matching span — nothing to update, but no error either
});
it('removing a nonexistent id returns silently, state unchanged', () => {
const refs = [mockRef('ref_1', 'image', '图片1')];
const editorHtml = mentionSpan('ref_1', 'image', '图片1');
useInputBarStore.setState({ references: refs, editorHtml });
useInputBarStore.getState().removeReference('nonexistent_id');
const state = useInputBarStore.getState();
expect(state.references).toHaveLength(1);
expect(state.references[0].label).toBe('图片1');
expect(state.editorHtml).toBe(editorHtml);
});
it('deleted image was never @-mentioned in the editor; other images still renamed', () => {
const refs = [
mockRef('ref_1', 'image', '图片1'),
mockRef('ref_2', 'image', '图片2'),
mockRef('ref_3', 'image', '图片3'),
];
// editorHtml 只 @ 了图片3没 @图片1/2
const editorHtml = `${mentionSpan('ref_3', 'image', '图片3')}`;
useInputBarStore.setState({ references: refs, editorHtml });
useInputBarStore.getState().removeReference('ref_1');
const state = useInputBarStore.getState();
expect(state.references[0].label).toBe('图片1'); // 原图片2
expect(state.references[1].label).toBe('图片2'); // 原图片3
// editor 里只有 ref_3 的 span应该更新成"图片2"
expect(state.editorHtml).toMatch(/data-ref-id="ref_3"[^>]*>[\s\S]*?图片2<\/span>/);
});
});
  describe('Consecutive deletions (concurrency)', () => {
    it('deleting two images back to back renumbers the remaining images correctly', () => {
      const refs = [
        mockRef('ref_1', 'image', '图片1'),
        mockRef('ref_2', 'image', '图片2'),
        mockRef('ref_3', 'image', '图片3'),
        mockRef('ref_4', 'image', '图片4'),
      ];
      const editorHtml = `${mentionSpan('ref_1', 'image', '图片1')} ${mentionSpan('ref_2', 'image', '图片2')} ${mentionSpan('ref_3', 'image', '图片3')} ${mentionSpan('ref_4', 'image', '图片4')}`;
      useInputBarStore.setState({ references: refs, editorHtml });
      useInputBarStore.getState().removeReference('ref_2');
      useInputBarStore.getState().removeReference('ref_1');
      const state = useInputBarStore.getState();
      expect(state.references).toHaveLength(2);
      expect(state.references[0].id).toBe('ref_3');
      expect(state.references[0].label).toBe('图片1');
      expect(state.references[1].id).toBe('ref_4');
      expect(state.references[1].label).toBe('图片2');
      expect(state.editorHtml).toMatch(/data-ref-id="ref_3"[^>]*>[\s\S]*?图片1<\/span>/);
      expect(state.editorHtml).toMatch(/data-ref-id="ref_4"[^>]*>[\s\S]*?图片2<\/span>/);
    });
  });
});
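For context, the renumber-on-delete behavior these tests pin down could look roughly like the following. This is a hypothetical sketch, not the real `inputBar` store: the `Ref` shape, the 图片/音频 label prefixes, and the span markup are assumptions inferred from the assertions above.

```typescript
// Hypothetical sketch of removeReference (NOT the real store implementation).
// Assumes @mention spans carry data-ref-id and plain-text label content.
type Ref = { id: string; type: string; label: string };

function removeReference(refs: Ref[], editorHtml: string, id: string) {
  const target = refs.find((r) => r.id === id);
  if (!target) return { refs, editorHtml }; // unknown id: silent no-op
  const kept = refs.filter((r) => r.id !== id);

  // Drop the deleted ref's own @mention span, if it was mentioned at all.
  let html = editorHtml.replace(
    new RegExp(`<span[^>]*data-ref-id="${id}"[^>]*>[^<]*</span>\\s?`),
    ''
  );

  // Renumber labels per type (图片1, 图片2, ... / 音频1, ...) in the remaining
  // order, and rewrite each matching span's text in the editor HTML.
  const counters: Record<string, number> = {};
  const typeName: Record<string, string> = { image: '图片', audio: '音频' };
  const renumbered = kept.map((r) => {
    counters[r.type] = (counters[r.type] ?? 0) + 1;
    const label = `${typeName[r.type] ?? r.type}${counters[r.type]}`;
    html = html.replace(
      new RegExp(`(data-ref-id="${r.id}"[^>]*>)[^<]*(</span>)`),
      `$1@${label}$2`
    );
    return { ...r, label };
  });

  return { refs: renumbered, editorHtml: html };
}
```

The no-op return for an unknown id and the "rename even when not mentioned" mapping are exactly the edge cases the suite above exercises.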


@@ -0,0 +1,160 @@
/**
 * 1080P resolution tests
 *
 * Covers:
 * 1. Bidirectional interception: setModel/setResolution reject the Fast + 1080P combination
 * 2. estimatedTokens / pixel-table contract with the backend
 * 3. Regression for the `|| '720p'` silent-downgrade bug
 */
import { describe, it, expect, beforeEach, vi } from 'vitest';
import { useInputBarStore } from '../../src/store/inputBar';

// Mock Toast to avoid real DOM calls
vi.mock('../../src/components/Toast', () => ({
  showToast: vi.fn(),
}));
describe('1080P — store resolution state', () => {
  beforeEach(() => {
    useInputBarStore.getState().reset();
  });

  it('default resolution is 720p', () => {
    expect(useInputBarStore.getState().resolution).toBe('720p');
  });

  it('setResolution can set 480p / 720p / 1080p', () => {
    const { setResolution } = useInputBarStore.getState();
    setResolution('480p');
    expect(useInputBarStore.getState().resolution).toBe('480p');
    setResolution('1080p');
    expect(useInputBarStore.getState().resolution).toBe('1080p');
    setResolution('720p');
    expect(useInputBarStore.getState().resolution).toBe('720p');
  });

  it('reset restores the resolution to 720p', () => {
    const { setResolution, reset } = useInputBarStore.getState();
    setResolution('1080p');
    reset();
    expect(useInputBarStore.getState().resolution).toBe('720p');
  });
});
describe('1080P — bidirectional interception (principle 1: no silent downgrade)', () => {
  beforeEach(() => {
    useInputBarStore.getState().reset();
  });

  it('switching to the Fast model at 1080P is blocked — neither model nor resolution changes', () => {
    const { setResolution, setModel } = useInputBarStore.getState();
    setResolution('1080p');
    setModel('seedance_2.0_fast');
    // Interception succeeded: model keeps its old value, resolution stays 1080p (no downgrade to 720p)
    const state = useInputBarStore.getState();
    expect(state.model).toBe('seedance_2.0');
    expect(state.resolution).toBe('1080p');
  });

  it('switching to 1080P in Fast mode is blocked — neither model nor resolution changes', () => {
    const { setModel, setResolution } = useInputBarStore.getState();
    setModel('seedance_2.0_fast');
    setResolution('1080p');
    const state = useInputBarStore.getState();
    expect(state.model).toBe('seedance_2.0_fast');
    expect(state.resolution).toBe('720p'); // still the 720p default, not changed to 1080p
  });

  it('switching to 1080P under AirDrama works normally', () => {
    const { setResolution } = useInputBarStore.getState();
    setResolution('1080p');
    expect(useInputBarStore.getState().resolution).toBe('1080p');
  });

  it('switching back to AirDrama at 1080P works (same model is not intercepted)', () => {
    const { setModel, setResolution } = useInputBarStore.getState();
    setResolution('1080p');
    setModel('seedance_2.0');
    expect(useInputBarStore.getState().model).toBe('seedance_2.0');
    expect(useInputBarStore.getState().resolution).toBe('1080p');
  });

  it('switching to 480p/720p under Fast works (only 1080p is intercepted)', () => {
    const { setModel, setResolution } = useInputBarStore.getState();
    setModel('seedance_2.0_fast');
    setResolution('480p');
    expect(useInputBarStore.getState().resolution).toBe('480p');
    setResolution('720p');
    expect(useInputBarStore.getState().resolution).toBe('720p');
  });
});
describe('1080P — official pixel values (aligned with backend RESOLUTION_MAP)', () => {
  // The official pixel table from the docs is hard-coded here as a frontend contract test.
  // If RESOLUTION_PIXELS in Toolbar.tsx changes, these tests should be updated with it.
  // Counterpart: backend/utils/billing.py::RESOLUTION_MAP
  const EXPECTED_PIXELS = {
    '480p': {
      '16:9': [864, 496],
      '9:16': [496, 864],
      '4:3': [752, 560],
      '1:1': [640, 640],
      '3:4': [560, 752],
      '21:9': [992, 432],
    },
    '720p': {
      '16:9': [1280, 720],
      '9:16': [720, 1280],
      '4:3': [1112, 834],
      '1:1': [960, 960],
      '3:4': [834, 1112],
      '21:9': [1470, 630],
    },
    '1080p': {
      '16:9': [1920, 1080],
      '9:16': [1080, 1920],
      '4:3': [1664, 1248],
      '1:1': [1440, 1440],
      '3:4': [1248, 1664],
      '21:9': [2206, 946], // key: not 2176×928 (the seedance 1.0 value)
    },
  };

  // estimate_tokens official formula (aligned with the frontend Toolbar and backend billing.py)
  function estimateTokens(w: number, h: number, duration: number, inputVideoDuration = 0) {
    return Math.round((w * h * 24 * (duration + inputVideoDuration)) / 1024);
  }

  it('1080P, 5s, 16:9, no input video = 243000 tokens', () => {
    const [w, h] = EXPECTED_PIXELS['1080p']['16:9'];
    expect(estimateTokens(w, h, 5)).toBe(243000);
  });

  it('1080P, 5s, 16:9 with a 2s input video = 340200 tokens (pure formula, not corrected up to the 437400 floor)', () => {
    const [w, h] = EXPECTED_PIXELS['1080p']['16:9'];
    expect(estimateTokens(w, h, 5, 2)).toBe(340200);
  });

  it('720P, 5s, 16:9, no input video = 108000 tokens', () => {
    const [w, h] = EXPECTED_PIXELS['720p']['16:9'];
    expect(estimateTokens(w, h, 5)).toBe(108000);
  });

  it('1080P 21:9 pixels = 2206×946 (not the seedance 1.0 value of 2176×928)', () => {
    expect(EXPECTED_PIXELS['1080p']['21:9']).toEqual([2206, 946]);
  });

  it('price example: 1080P 5s 16:9 × 51 元 per million tokens = 12.39 元', () => {
    const tokens = 243000;
    const price = 51;
    const cost = (tokens * price) / 1_000_000;
    expect(cost.toFixed(2)).toBe('12.39');
  });

  it('price example: 720P 5s 16:9 × 46 元 per million tokens = 4.97 元', () => {
    const tokens = 108000;
    const price = 46;
    const cost = (tokens * price) / 1_000_000;
    expect(cost.toFixed(2)).toBe('4.97');
  });
});
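As a standalone sanity check on the arithmetic asserted above, the token and price formulas can be sketched as below. The 51 / 46 元-per-million prices are taken from these tests, not from any official price list.

```typescript
// Token estimate: width × height × 24 fps × seconds / 1024 (formula used in the tests above)
function estimateTokens(w: number, h: number, duration: number, inputVideoDuration = 0): number {
  return Math.round((w * h * 24 * (duration + inputVideoDuration)) / 1024);
}

// Cost in 元 at a given price per million tokens
function costYuan(tokens: number, pricePerMillion: number): number {
  return (tokens * pricePerMillion) / 1_000_000;
}

const t1080 = estimateTokens(1920, 1080, 5); // 1920*1080*24*5/1024 = 243000
const t720 = estimateTokens(1280, 720, 5);   // 1280*720*24*5/1024 = 108000
console.log(t1080, costYuan(t1080, 51).toFixed(2)); // 243000 "12.39"
console.log(t720, costYuan(t720, 46).toFixed(2));   // 108000 "4.97"
```

Note that an extra input-video duration simply extends the time term: 5s plus a 2s input at 1080P gives 1920·1080·24·7/1024 = 340200 tokens, matching the test above.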