Compare commits


11 Commits

Author SHA1 Message Date
zyc
2f80ae80c3 add datebase 2026-04-17 20:24:05 +08:00
seaislee1209
2281c64ee8 fix: audio cannot be the sole reference asset (frontend validation + toast hint)
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 6m0s
The Seedance API does not support audio-only or text+audio input; audio must be paired with an image or video.
- canSubmit() now checks both references and assetMentions
- Toolbar shows a toast explaining why when the disabled button is clicked

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 14:10:39 +08:00
zyc
41115faa16 add md
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 5m19s
2026-04-13 20:47:43 +08:00
seaislee1209
0b770340c8 fix: asset-library references unviewable on the assets page + re-edit leaking assets
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 4m52s
1. AdminAssetsPage/TeamAssetsPage: asset:// protocol URLs now use thumb_url to display thumbnails
2. generation.ts reEdit/regenerate: filter isAssetRef so library references are not mixed into the references array
3. PromptInput extractText: sync the assetMentions store in real time; deleting an @tag no longer leaves stale data

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 18:33:08 +08:00
zyc
177a9c7dec feat: automatic HTTP→HTTPS redirect (Traefik Middleware + CI/CD deploy completion)
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 3m36s
- Add redirect-https-middleware.yaml (Traefik 301 permanent redirect)
- ingress.yaml: add the middleware annotation
- deploy.yaml: add kubectl apply for cert-manager-issuer and redirect-middleware

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 11:38:58 +08:00
zyc
a6a3928091 perf: kubectl 4s timeout + 5 retries, so K3s intranet flakiness cannot hang deploys
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 4m9s
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 11:29:24 +08:00
zyc
ab1b00f94a feat: automatic HTTP→HTTPS redirect (Traefik Middleware + Ingress annotation)
Some checks failed
Build and Deploy / build-and-deploy (push) Has been cancelled
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 11:17:02 +08:00
seaislee1209
5972f45784 fix: broken thumbnails on asset-library references + pollStatus cross-project asset protection
Some checks failed
Build and Deploy / build-and-deploy (push) Has been cancelled
- MentionTag: onError fallback shows a video/image placeholder icon when the thumbnail fails to load
- createMentionSpan/VideoDetailModal: img onError hides broken images
- buildReferenceSnapshots: library references use thumb_url as previewUrl
- isAssetRef flag keeps video thumbnails out of <video> rendering and prevents re-edit duplication
- pollStatus: already-active assets skip the remote query, so cross-project assets are no longer deleted by mistake

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 11:11:12 +08:00
seaislee1209
db1bbfa1d4 Merge branch 'dev' of https://gitea.airlabs.art/zyc/video-shuoshan into dev
Some checks failed
Build and Deploy / build-and-deploy (push) Has been cancelled
2026-04-04 22:18:54 +08:00
seaislee1209
4b2dd9ef5e fix: audio ♫ glyph leaking into prompt text (render via CSS ::before instead)
createMentionSpan used to set the audio ♫ through textContent,
so extractText()'s el.textContent pulled it into the plain prompt text,
leaving an extra ♫ character after renderPromptWithMentions matched.

Rendering it via CSS ::before content keeps it out of textContent,
so the prompt no longer contains stray ♫ characters.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-04 22:17:07 +08:00
zyc
3bc8b78507 perf: docker cleanup keeps the base-image cache, prunes only dangling images
All checks were successful
Build and Deploy / build-and-deploy (push) Successful in 4m4s
2026-04-04 21:54:11 +08:00
27 changed files with 49667 additions and 81 deletions


@@ -133,41 +133,45 @@ jobs:
           sed -i "s|redis://zyc:Zyc188208@redis-shzlsczo52dft8mia.redis.ivolces.com:6379/0|${{ env.REDIS_URL }}|g" k8s/celery-deployment.yaml
 
           # All kubectl operations with retry (K3s intranet connections can flake)
-          for attempt in 1 2 3; do
-            echo "Deploy attempt $attempt/3..."
+          export KUBECTL_TIMEOUT="--request-timeout=4s"
+          for attempt in 1 2 3 4 5; do
+            echo "Deploy attempt $attempt/5..."
             {
               # Create/update image pull secret for CR
-              kubectl create secret docker-registry cr-pull-secret \
+              kubectl $KUBECTL_TIMEOUT create secret docker-registry cr-pull-secret \
                 --docker-server="${{ env.CR_SERVER_ACTIVE }}" \
                 --docker-username="${{ env.CR_USERNAME_ACTIVE }}" \
                 --docker-password="${{ env.CR_PASSWORD_ACTIVE }}" \
-                --dry-run=client -o yaml | kubectl apply -f -
+                --dry-run=client -o yaml | kubectl $KUBECTL_TIMEOUT apply -f -
 
               # Create/update secrets (business secrets; the DB one is already in the yaml)
-              kubectl create secret generic video-backend-secrets \
+              kubectl $KUBECTL_TIMEOUT create secret generic video-backend-secrets \
                 --from-literal=ARK_API_KEY='${{ secrets.ARK_API_KEY }}' \
                 --from-literal=TOS_ACCESS_KEY='${{ secrets.TOS_ACCESS_KEY }}' \
                 --from-literal=TOS_SECRET_KEY='${{ secrets.TOS_SECRET_KEY }}' \
                 --from-literal=DJANGO_SECRET_KEY='${{ secrets.DJANGO_SECRET_KEY }}' \
                 --from-literal=ALIYUN_SMS_ACCESS_KEY='${{ secrets.ALIYUN_SMS_ACCESS_KEY }}' \
                 --from-literal=ALIYUN_SMS_ACCESS_SECRET='${{ secrets.ALIYUN_SMS_ACCESS_SECRET }}' \
-                --dry-run=client -o yaml | kubectl apply -f -
+                --dry-run=client -o yaml | kubectl $KUBECTL_TIMEOUT apply -f -
 
               # Apply manifests
-              kubectl apply -f k8s/backend-deployment.yaml
-              kubectl apply -f k8s/celery-deployment.yaml
-              kubectl apply -f k8s/web-deployment.yaml
-              kubectl apply -f k8s/ingress.yaml
+              kubectl $KUBECTL_TIMEOUT apply -f k8s/cert-manager-issuer.yaml
+              kubectl $KUBECTL_TIMEOUT apply -f k8s/redirect-https-middleware.yaml
+              kubectl $KUBECTL_TIMEOUT apply -f k8s/backend-deployment.yaml
+              kubectl $KUBECTL_TIMEOUT apply -f k8s/celery-deployment.yaml
+              kubectl $KUBECTL_TIMEOUT apply -f k8s/web-deployment.yaml
+              kubectl $KUBECTL_TIMEOUT apply -f k8s/ingress.yaml
 
               # Preserve real client IP
-              kubectl patch svc traefik -n kube-system -p '{"spec":{"externalTrafficPolicy":"Local"}}' 2>/dev/null || true
+              kubectl $KUBECTL_TIMEOUT patch svc traefik -n kube-system -p '{"spec":{"externalTrafficPolicy":"Local"}}' 2>/dev/null || true
 
-              kubectl rollout restart deployment/video-backend
-              kubectl rollout restart deployment/celery-worker
-              kubectl rollout restart deployment/video-web
+              kubectl $KUBECTL_TIMEOUT rollout restart deployment/video-backend
+              kubectl $KUBECTL_TIMEOUT rollout restart deployment/celery-worker
+              kubectl $KUBECTL_TIMEOUT rollout restart deployment/video-web
             } 2>&1 | tee /tmp/deploy.log && break
-            echo "Attempt $attempt failed, retrying in 10s..."
-            sleep 10
+            echo "Attempt $attempt failed, retrying in 30s..."
+            sleep 30
          done
 
          # ===== Log Center: failure reporting =====
@@ -234,7 +238,7 @@ jobs:
         if: always()
         run: |
           docker container prune -f
-          docker image prune -a -f
+          docker image prune -f
           docker builder prune -a -f
           echo "Disk usage after cleanup:"
           df -h / | tail -1

airlabs-Claude.pem (new file, 27 lines)

@@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEA3wbag5McElew1MnCGi5IUMLw6kQsqI+2Mf09dX5Rfy2HQjAo
XSocUYKWBOwDUZAZGHHIN8sv7ghM58cEK22LKpAFPLfFGyD4lmHMz/6q3b78WlTW
MOK8wKuJisPUyn91uJvGk8O5dLvyZRnd5fobgChgUpXTi3Gf8rFH7gP3kPh/QxVu
ifGOQvpeEZvYYxmvBKbB4v6Mv361eAfnTb2awB4zAUGL6fpuoJL8R2tygu5kF8l6
ZELwj1FqUVOSED59zqlKDBjeD3VRxNwc1KkUtATl9HB/eBxLRX2Vxz93fLxjLYmc
F6rTwLqN6d1RIfGMMT2i+snnFZlVtCzlnKm0VwIDAQABAoIBAQCSMF3fTPhjlZNV
h4JxwtCoD3/3LwTO4JSeo847S7eD04YLfqGWn9m8HArV4xYxynCIs1x4Jfme31X5
v1weU0mbdpfYOuU6aRxJBoZH+Dhr7ZpgY6eal6T97fLUQJUkvvOdNI6voOXZfLDg
UFpBOvX3xX+N4qOtjac4X7g0belC3kZi7dVREPfiLojhelrA3VV0DKjlFvv8swlX
fx9NhSEIbN0ox5uo5/DnvPRmiz81MHGOQ2u+YfZ0j52FhHDWRDirjxMl8xQW3Ddh
MJc/a9cNWwqzY4yq7/trNBjdUkuOE46LIlXJ1PhtxkBXJkEdQA/Z9Odcixn1XBqL
KB3F7Q0xAoGBAOIjwE8B68tF7zsbm1PW49E9A8cdOdvG885/4inAh91FYZSMtKGR
sGOfnN+Ha7TctsbiyjdWLEJd8CLAr4UKh4KbBkyZXkY8eKyjcnuQqn5hm63dqkwN
+hsv/SmO6htP3g2EC6QFMxaduGExT+e/HhGHIDBTmd0BQE5Hd4nBvtQrAoGBAPx5
4H9+pA8hMK+Ql0+M3YwnI81jTzDuWJfvDP4cZcrwLvf6Z2SaXlFTVO+d/00iitx3
glS4N3m5WOBd6lhCpfrPSI8rxqeLkZcwdD8v7dCb10+gK6noFevj+IOq3UxwwyH0
epEVFdZ1mbgf6DfRyArqWhT53UYD9ZSkDffbkW6FAoGBAJ444W7mKzKYhd/XWwB4
FAHsLN595mONejx7YaRQ3z7EMpgbMq7xHnc10C7ds1BiNUhGmbHKC0GMNF48bxIo
4dNR4EBr9ngyC0TPP2SRPZkbdi9aLrLz/JBVLU6MfeQKJ5VRVEu4j5w9Uio+tGez
Yrhk0PK/K6JkI7ghbNPnyTrtAoGBAMbuaPtMF4xsRGYw8WgWwAHMXSNZ2m3dfyTH
kF8wlOwf74IoZQsZrrM5i7T5ss1eKDeqWqDSPbPFXMf8d8dvTESgyrU0cuRUzjRo
U0/uPd2ezTnKJF1Npugkyg1EtfWi6713WpOyH3DJXIN9cIV637nqCWx5q+Wc/QVP
dkoTUTXZAoGAHFPQ/3VF7GBQAK0W2MDzl4GYyoVK+p3cghixywS1WEx0NuYHlB2A
LJbln+kDDTHSlZpKBg0jKQ6WHOrv4wQbhh34GLZoHVbErefWaqgWNPz6umdNY1aE
SJXkLBzCU8AEq5+QEcqX+8TGpz3J4GZx5Y/Rr8WFcze9z56chmh8blU=
-----END RSA PRIVATE KEY-----

@@ -3446,7 +3446,8 @@ def asset_poll_status_view(request, asset_id):
     except Asset.DoesNotExist:
         return Response({'error': '素材不存在'}, status=status.HTTP_404_NOT_FOUND)
 
-    if asset.remote_asset_id:
+    # Assets already active with a URL skip the remote query (avoids cross-project assets being wrongly deleted)
+    if asset.remote_asset_id and asset.status != 'active':
         from utils import assets_client
         from utils.assets_client import AssetsAPIError
         try:

@@ -0,0 +1,430 @@
# Analysis: HTTPS Redirect & Certificate Issuance Flow

> **For AI agents / developers**: this document summarizes the complete recipe for automatic HTTP→HTTPS redirects and automatic Let's Encrypt certificates on a K3s + Traefik v3 + cert-manager stack. Other projects can adapt their own CI/CD pipelines and K8s configs directly from it.

---

## 0. Onboarding guide for other projects (quick reference)

### The 4 things you need to do

#### 1. New file: `k8s/cert-manager-issuer.yaml`
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: airlabsv001@gmail.com
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
    - http01:
        ingress:
          class: traefik
```
> If the cluster already has a ClusterIssuer with this name (multiple projects sharing one cluster), this step can be skipped: `kubectl apply` is idempotent.

#### 2. New file: `k8s/redirect-https-middleware.yaml`
```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: redirect-https
spec:
  redirectScheme:
    scheme: https
    permanent: true
```
#### 3. Modify `k8s/ingress.yaml`

Make sure it contains the following 3 annotations and the TLS config:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: your-ingress-name
  annotations:
    kubernetes.io/ingress.class: "traefik"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"  # ← triggers automatic certificate issuance
    traefik.ingress.kubernetes.io/router.middlewares: "default-redirect-https@kubernetescrd"  # ← HTTP→HTTPS redirect
spec:
  tls:
  - hosts:
    - your-domain-api.example.com  # ← change to your domains
    - your-domain.example.com
    secretName: your-project-tls   # ← Secret that stores the cert; any name that doesn't clash with other projects
  rules:
  - host: your-domain-api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: your-backend-service
            port:
              number: 8000
  - host: your-domain.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: your-web-service
            port:
              number: 80
```
#### 4. Modify the CI/CD pipeline (deploy.yaml)

In the `kubectl apply` deploy step, add these two lines **before** ingress.yaml:
```yaml
# Before:
kubectl apply -f k8s/backend-deployment.yaml
kubectl apply -f k8s/web-deployment.yaml
kubectl apply -f k8s/ingress.yaml

# After:
kubectl apply -f k8s/cert-manager-issuer.yaml        # ← new: registers the Let's Encrypt CA
kubectl apply -f k8s/redirect-https-middleware.yaml  # ← new: HTTP→HTTPS redirect middleware
kubectl apply -f k8s/backend-deployment.yaml
kubectl apply -f k8s/web-deployment.yaml
kubectl apply -f k8s/ingress.yaml
```
> **Order matters**: cert-manager-issuer and the middleware must be applied before the ingress; otherwise the ingress references resources that do not exist yet, which leads to failed certificate issuance or a redirect that never takes effect.
### Cluster prerequisites (once per server)

The following commands must be **run manually once via SSH on each K8s master node**; they do not belong in CI/CD:
```bash
# 1. Confirm cert-manager is installed
kubectl get pods -n cert-manager
# If not, install it first: https://cert-manager.io/docs/installation/

# 2. Configure Traefik's global HTTP→HTTPS redirect
kubectl -n kube-system patch deployment traefik --type=json -p '[
  {"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--entryPoints.web.http.redirections.entryPoint.to=:443"},
  {"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--entryPoints.web.http.redirections.entryPoint.scheme=https"},
  {"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--entryPoints.web.http.redirections.entryPoint.permanent=true"}
]'
```
> **Key detail**: `to=:443` must not be written as `to=websecure`. Traefik's internal websecure port is 8443, so `websecure` would produce redirect URLs carrying `:8443` that users cannot reach.
### Verification checklist
```bash
# HTTP redirect
curl -I http://your-domain
# Expected: 308 Permanent Redirect → https://your-domain

# Certificate validity
curl -v https://your-domain 2>&1 | grep "issuer"
# Expected: issuer: ... Let's Encrypt ...

# Certificate status
kubectl get certificate -A
# Expected: Ready = True
```
---

## 1. Automatic HTTP → HTTPS redirect

### Problem
Visiting via `http://` does not automatically redirect to `https://`.

### Root cause
Traefik v3 (K3s's built-in Ingress Controller) by default only creates HTTPS routes for Ingresses configured with TLS. HTTP requests have no matching route, so nothing is there to issue the redirect.

### Fix
Add global HTTP→HTTPS redirect arguments to the Traefik Deployment (no per-Ingress config needed; it applies to every project in the cluster):
```
--entryPoints.web.http.redirections.entryPoint.to=:443
--entryPoints.web.http.redirections.entryPoint.scheme=https
--entryPoints.web.http.redirections.entryPoint.permanent=true
```
**Command** (on the K8s master node):
```bash
kubectl -n kube-system patch deployment traefik --type=json -p '[
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--entryPoints.web.http.redirections.entryPoint.to=:443"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--entryPoints.web.http.redirections.entryPoint.scheme=https"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--entryPoints.web.http.redirections.entryPoint.permanent=true"}
]'
```
> **Note**: use `to=:443`, not `to=websecure`. Traefik's internal websecure entrypoint listens on port 8443; with `to=websecure` the redirect URL would carry the `:8443` port and fail for users. `:443` guarantees the redirect targets the standard HTTPS port.

### Staging status
Fixed ✅: `http://airflow-studio.test.airlabs.art` → 308 → `https://airflow-studio.test.airlabs.art`

### Production status
Not fixed ❌: the same `kubectl patch` must be run on the production K8s cluster.
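The `:443` vs `websecure` pitfall is easy to see off-cluster. A minimal sketch (hypothetical helper, not Traefik code; the "websecure → 8443" mapping is the assumption the note describes) of how the redirect target shapes the resulting Location URL:

```python
from urllib.parse import urlsplit, urlunsplit

def redirect_location(url: str, to: str) -> str:
    """Rewrite an http:// URL to https:// using a Traefik-style `to` target."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    # assumed mapping: the named entrypoint resolves to its internal port 8443
    port = "8443" if to == "websecure" else to.lstrip(":")
    netloc = host if port == "443" else f"{host}:{port}"
    return urlunsplit(("https", netloc, parts.path, parts.query, parts.fragment))

print(redirect_location("http://airflow-studio.airlabs.art/login", ":443"))
# https://airflow-studio.airlabs.art/login  (reachable)
print(redirect_location("http://airflow-studio.airlabs.art/login", "websecure"))
# https://airflow-studio.airlabs.art:8443/login  (users can't reach :8443)
```

With `websecure`, every redirect carries `:8443`, which is exactly the failure mode the note above warns about.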
---

## 2. SSL certificate issuance flow

### Overall architecture
```
User browser
     │
┌────▼────┐
│   DNS   │  *.airlabs.art → cluster public IP
└────┬────┘
     │
┌────▼───────────────┐
│   Traefik (K3s)    │  Ingress Controller
│   ports 80 / 443   │
└────┬───────────────┘
     │
┌────▼───────────────────┐
│  Ingress resource      │  maps domains → Services
│  + TLS secretName      │  where the cert is stored
│  + cert-manager annot. │  triggers automatic issuance
└────┬───────────────────┘
     │
┌────▼───────────────────┐
│  cert-manager          │  watches Ingress changes,
│  (in-cluster Pod)      │  manages the cert lifecycle
└────┬───────────────────┘
     │
┌────▼───────────────────┐
│  Let's Encrypt         │  free CA; validates domains
│  (external service)    │  via the ACME protocol
└────────────────────────┘
```
### Detailed walkthrough

#### Step 1: the ClusterIssuer defines the CA configuration
File: `k8s/cert-manager-issuer.yaml`
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory  # Let's Encrypt production API
    email: airlabsv001@gmail.com                            # expiry-notice mailbox
    privateKeySecretRef:
      name: letsencrypt-prod-key                            # ACME account private key storage
    solvers:
    - http01:
        ingress:
          class: traefik                                    # use Traefik to answer the challenge
```
- A `ClusterIssuer` is a cluster-scoped resource usable from every namespace
- After ACME account registration, the account private key is stored in the `letsencrypt-prod-key` Secret
#### Step 2: the Ingress triggers certificate issuance
File: `k8s/ingress.yaml`
```yaml
metadata:
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"  # ← tells cert-manager which Issuer to use
spec:
  tls:
  - hosts:
    - airflow-studio-api.airlabs.art  # ← domains that need the cert
    - airflow-studio.airlabs.art
    secretName: airflow-studio-tls    # ← Secret the cert is stored in
```
When cert-manager sees the `cert-manager.io/cluster-issuer` annotation on this Ingress, it automatically:
1. Creates a `Certificate` resource
2. Creates a `CertificateRequest` resource
3. Creates an `Order` resource
4. Creates a `Challenge` resource (one per domain)
#### Step 3: HTTP-01 validation (the critical part)
cert-manager uses **HTTP-01 validation** to prove you control the domain:
```
Let's Encrypt server                                your cluster
        │                                                │
        │ 1. hands you a token                           │
        │ ──────────────────────────────────────────►    │
        │                                                │
        │ 2. response placed at                          │
        │    http://<domain>/.well-known/                │
        │    acme-challenge/<token>                      │  cert-manager automatically
        │                                                │  creates a temporary Ingress
        │ 3. Let's Encrypt fetches that URL to verify    │  route for this path
        │ ──────────────────────────────────────────►    │
        │                                                │
        │ 4. validation passes, certificate issued       │
        │ ◄──────────────────────────────────────────    │
```
**Preconditions for validation to succeed**:

| Condition | Notes |
|------|------|
| Correct DNS | the domain must resolve to the cluster's public IP |
| Port 80 open | Let's Encrypt validates only over HTTP port 80 |
| Traefik running | must serve `/.well-known/acme-challenge/` requests |
| cert-manager installed | a cert-manager Pod must be running in the cluster |
| No firewall blocking | security groups/firewalls must not block Let's Encrypt's access to port 80 |
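The challenge exchange in step 3 boils down to one URL and one expected body. A minimal sketch (token and thumbprint values are made up; the well-known path and the `token.thumbprint` body format follow RFC 8555):

```python
def challenge_url(domain: str, token: str) -> str:
    # the URL Let's Encrypt fetches over plain HTTP on port 80
    return f"http://{domain}/.well-known/acme-challenge/{token}"

def key_authorization(token: str, thumbprint: str) -> str:
    # expected response body: token + "." + base64url ACME account key thumbprint
    return f"{token}.{thumbprint}"

print(challenge_url("airflow-studio.airlabs.art", "tok123"))
# http://airflow-studio.airlabs.art/.well-known/acme-challenge/tok123
print(key_authorization("tok123", "thumbABC"))
# tok123.thumbABC
```

The temporary Ingress route cert-manager creates exists only to answer requests for that path until the Order completes.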
#### Step 4: certificate storage and use
After validation:
- cert-manager stores the certificate and key in the Secret `airflow-studio-tls`
  - `tls.crt`: certificate chain (server cert + intermediate)
  - `tls.key`: private key
- Traefik reads that Secret automatically and uses it for the HTTPS handshake

#### Step 5: automatic renewal
- Let's Encrypt certificates are valid for **90 days**
- cert-manager renews automatically **30 days** before expiry (`renewalTime`)
- Renewal works exactly like initial issuance (HTTP-01 validation)
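The renewal window above in concrete dates; a self-contained sketch (the "expiry minus 30 days" rule is assumed from the bullets here, and the issue date is taken from this project's current cert):

```python
from datetime import datetime, timedelta

def renewal_time(not_after: datetime, renew_before_days: int = 30) -> datetime:
    # cert-manager schedules renewal this long before expiry
    return not_after - timedelta(days=renew_before_days)

issued = datetime(2026, 4, 4)
not_after = issued + timedelta(days=90)    # Let's Encrypt validity window
print(not_after.date())                    # 2026-07-03
print(renewal_time(not_after).date())      # 2026-06-03
```

So a cert issued 2026-04-04 expires 2026-07-03 (matching the production cert's Valid range) and gets renewed around 2026-06-03.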
---

## 3. Diagnosing production's "not secure" HTTPS warning

### Current production certificate (checked externally)
```
Subject: CN=airflow-studio-api.airlabs.art
Issuer:  C=US, O=Let's Encrypt, CN=R13
Valid:   2026-04-04 ~ 2026-07-03
SAN:     airflow-studio-api.airlabs.art, airflow-studio.airlabs.art
Chain:   complete (R13 → ISRG Root X1)
Verify:  return:1 (pass)
```
**The certificate itself is valid.** It fully passes verification from the openssl command line.

### Possible reasons the browser says "not secure"

#### Reason 1: production port 80 does not redirect to HTTPS (most likely)
```bash
# Test result:
curl http://airflow-studio.airlabs.art/login
# → HTTP 200: the page is served directly, no redirect
```
Production port 80 serves the page content directly (via nginx); whenever the address bar shows `http://`, the browser marks the page "not secure". This is not a certificate problem: **users are simply never steered to HTTPS**.

**Fix**: run the same Traefik redirect patch on the production cluster (see section 1).
#### Reason 2: no HSTS header
Even with the redirect in place, the first visit still goes over HTTP. An HSTS header makes browsers remember to always use HTTPS.

Adding it in `web/nginx.conf` has no effect when Traefik terminates TLS; set it at the Ingress layer instead:
```yaml
# ingress.yaml annotation
traefik.ingress.kubernetes.io/router.middlewares: "default-hsts@kubernetescrd"
```
Plus the matching HSTS Middleware:
```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: hsts
spec:
  headers:
    stsSeconds: 31536000
    stsIncludeSubdomains: true
    stsPreload: true
```
#### Reason 3: mixed content
The page loads over HTTPS but some resources (images, API calls, JS) load over HTTP.
- Frontend source checked: **no hard-coded `http://`** ✅
- Possible source: video/image URLs stored in the database starting with `http://`
- To check: browser F12 → Console, look for "Mixed Content" warnings

#### Reason 4: cert-manager not deployed on the production cluster
Production and staging are **different K8s clusters**. Confirm the production cluster also has cert-manager installed:
```bash
kubectl get pods -n cert-manager
```
Without it, certificates are never issued and Traefik falls back to a self-signed cert (which browsers flag as not secure).
---

## 4. Staging vs production comparison checklist

| Check | Staging | Production | Command |
|--------|--------|--------|----------|
| cert-manager running | ✅ | ❓ TBD | `kubectl get pods -n cert-manager` |
| ClusterIssuer exists | ✅ | ❓ TBD | `kubectl get clusterissuer` |
| Certificate Ready | ✅ Ready | ❓ TBD | `kubectl get certificate -A` |
| TLS Secret exists | ✅ | ❓ TBD | `kubectl get secret airflow-studio-tls` |
| Full cert chain | ✅ Let's Encrypt | ✅ Let's Encrypt | `openssl s_client -connect <domain>:443` |
| HTTP→HTTPS redirect | ✅ 308 | ❌ returns 200 | `curl -I http://<domain>` |
| Traefik redirect config | ✅ configured | ❌ missing | `kubectl get deploy traefik -n kube-system -o yaml` |
| Port 80 reachable | ✅ | ✅ | `curl http://<domain>` |
| Port 443 reachable | ✅ | ✅ | `curl -k https://<domain>` |
| Frontend mixed content | ✅ none | ❓ TBD | browser F12 Console |
---

## 5. Production fix checklist

### Step 1: SSH to the production K8s master node

### Step 2: check cert-manager
```bash
kubectl get pods -n cert-manager
kubectl get clusterissuer
kubectl get certificate -A
kubectl describe certificate airflow-studio-tls
```

### Step 3: if the certificate is in a bad state, delete it to force re-issuance
```bash
kubectl delete secret airflow-studio-tls
# cert-manager re-issues automatically (takes 1-3 minutes)
kubectl get certificate -A -w   # wait for Ready=True
```

### Step 4: configure the global HTTP→HTTPS redirect
```bash
kubectl -n kube-system patch deployment traefik --type=json -p '[
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--entryPoints.web.http.redirections.entryPoint.to=:443"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--entryPoints.web.http.redirections.entryPoint.scheme=https"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--entryPoints.web.http.redirections.entryPoint.permanent=true"}
]'
```

### Step 5: verify
```bash
# HTTP redirect
curl -I http://airflow-studio.airlabs.art/login
# Expected: 308 → https://airflow-studio.airlabs.art/login

# HTTPS certificate
curl -v https://airflow-studio.airlabs.art/login 2>&1 | grep -E "SSL|subject|issuer"
```

@@ -5,6 +5,7 @@ metadata:
   annotations:
     kubernetes.io/ingress.class: "traefik"
     cert-manager.io/cluster-issuer: "letsencrypt-prod"
+    traefik.ingress.kubernetes.io/router.middlewares: "default-redirect-https@kubernetescrd"
 spec:
   tls:
   - hosts:

@@ -0,0 +1,8 @@
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: redirect-https
spec:
  redirectScheme:
    scheme: https
    permanent: true

@@ -0,0 +1,428 @@
#!/usr/bin/env python3
"""
幂等同步 SQL dump 测试库 MySQL只插入真正缺失的数据
多次运行安全所有表按业务唯一键去重
- user : username 去重冲突则 old_uid 现有 new_uid 合并
- assetgroup : remote_group_id 去重
- asset : remote_asset_id 去重空则按 (group_id, name)
- generationrecord : task_id 去重
- loginrecord : (user_id, created_at, ip_address) 去重
- loginanomaly : (user_id, login_record_id, rule, created_at) 去重
- activesession : session_id 去重
- adminauditlog : (operator_id, action, target_id, created_at) 去重
执行完后按本次新插入的生成记录增量更新 team total_seconds_used / total_spent / balance
用法:
python3 idempotent_sync.py # dry-run
python3 idempotent_sync.py --commit # 写入
"""
import re, sys
from decimal import Decimal
import pymysql
import pymysql.cursors
SOURCE = '/Users/maidong/Desktop/zyc/研究openclaw/视频生成平台/jimeng-clone/数据库备份/video_auto_原19-55.sql'
TARGET_TEAMS = (3, 4, 12)
DB_TEST = dict(host='mysql-8351f937d637-public.rds.volces.com', port=3306,
               user='zyc', password='Zyc188208', database='video_auto', charset='utf8mb4',
               autocommit=False, cursorclass=pymysql.cursors.DictCursor)
DB_PROD = dict(host='mysql-d9bb4e81696d-public.rds.volces.com', port=3306,
               user='zyc', password='Zyc188208', database='video_auto', charset='utf8mb4',
               autocommit=False, cursorclass=pymysql.cursors.DictCursor)
# ---------- SQL dump parsing ----------
def split_values(s):
    vals, cur, in_str, i = [], '', False, 0
    while i < len(s):
        c = s[i]
        if in_str:
            if c == '\\' and i+1 < len(s):
                cur += c + s[i+1]; i += 2; continue
            if c == "'": in_str = False
            cur += c
        else:
            if c == "'": in_str = True; cur += c
            elif c == ',': vals.append(cur.strip()); cur = ''
            else: cur += c
        i += 1
    vals.append(cur.strip())
    return vals
def parse_table(tbl):
    rows = []
    with open(SOURCE, 'r', encoding='utf-8') as f:
        for line in f:
            if not line.startswith(f'INSERT INTO `{tbl}`'): continue
            m = re.search(r'VALUES \((.*)\);\s*$', line)
            if not m: continue
            rows.append(split_values(m.group(1)))
    return rows
def unq(v):
    if v == 'NULL': return None
    if v.startswith("'") and v.endswith("'"):
        return (v[1:-1].replace("\\'", "'").replace('\\"', '"').replace('\\\\', '\\')
                .replace('\\n', '\n').replace('\\r', '\r').replace('\\t', '\t').replace('\\0', '\0'))
    return v
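# Illustration (self-contained; sample literals are hypothetical, not from the
# dump): how dumped SQL literals become Python values. _unq_ref mirrors the
# common escapes handled by unq() above so this self-check also runs standalone.
def _unq_ref(v):
    if v == 'NULL':
        return None
    if v.startswith("'") and v.endswith("'"):
        return v[1:-1].replace("\\'", "'").replace('\\\\', '\\').replace('\\n', '\n')
    return v

assert _unq_ref('NULL') is None          # SQL NULL → Python None
assert _unq_ref("'it\\'s'") == "it's"    # escaped quote restored
assert _unq_ref('42') == '42'            # bare numbers stay strings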
# ---------- source rows → dicts ----------
def row_to_dict(row, cols):
    return {c: unq(v) for c, v in zip(cols, row)}
USER_COLS = ['id','password','last_login','is_superuser','username','first_name','last_name',
'is_staff','is_active','date_joined','email','created_at','updated_at',
'daily_seconds_limit','monthly_seconds_limit','is_team_admin','team_id',
'must_change_password','disabled_by','daily_generation_limit',
'monthly_generation_limit','spending_limit','last_read_announcement','is_team_owner']
AG_COLS = ['id','remote_group_id','name','description','thumbnail_url','created_at','created_by_id','team_id']
ASSET_COLS_SRC = ['id','remote_asset_id','name','url','status','error_message','created_at','group_id']
# the test DB's asset table also has asset_type, duration, thumbnail_url; filled in on insert
GEN_COLS = ['id','task_id','prompt','mode','model','aspect_ratio','duration','status','created_at',
'user_id','seconds_consumed','ark_task_id','error_message','reference_urls','result_url',
'base_cost_amount','cost_amount','frozen_amount','resolution','tokens_consumed',
'is_favorited','seed','completed_at','raw_error','updated_at','is_deleted']
LR_COLS = ['id','ip_address','user_agent','created_at','user_id','geo_city','geo_country',
'geo_province','geo_source','team_id']
LA_COLS = ['id','level','rule','detail','alerted','auto_disabled','disabled_target','created_at',
'login_record_id','team_id','user_id']
AS_COLS = ['id','session_id','device_type','user_agent','created_at','user_id']
AL_COLS = ['id','operator_name','action','target_type','target_id','target_name','before','after',
'ip_address','created_at','operator_id']
def main():
    commit = '--commit' in sys.argv
    use_prod = '--prod' in sys.argv
    DB = DB_PROD if use_prod else DB_TEST
    target_name = '[PROD]' if use_prod else '[TEST]'
    print(f'\n🎯 Target: {target_name} {DB["host"]}')
    if use_prod and commit:
        print('⚠️ Writing to PRODUCTION!')

    # ===== parse the source =====
    print('Parsing source SQL...')
    src_users_all = [row_to_dict(r, USER_COLS) for r in parse_table('accounts_user')]
    src_ags_all = [row_to_dict(r, AG_COLS) for r in parse_table('generation_assetgroup')]
    src_assets_all = [row_to_dict(r, ASSET_COLS_SRC) for r in parse_table('generation_asset')]
    src_gens_all = [row_to_dict(r, GEN_COLS) for r in parse_table('generation_generationrecord')]
    src_lrs_all = [row_to_dict(r, LR_COLS) for r in parse_table('accounts_loginrecord')]
    src_las_all = [row_to_dict(r, LA_COLS) for r in parse_table('accounts_loginanomaly')]
    src_ases_all = [row_to_dict(r, AS_COLS) for r in parse_table('accounts_activesession')]
    src_als_all = [row_to_dict(r, AL_COLS) for r in parse_table('accounts_adminauditlog')]

    # only handle users from the target teams
    src_team_users = [u for u in src_users_all if str(u['team_id']) in tuple(str(t) for t in TARGET_TEAMS)]
    src_uid_set = {int(u['id']) for u in src_team_users}
    src_uname_set = {u['username'] for u in src_team_users}

    # source data belonging to the target teams
    src_ags = [g for g in src_ags_all if str(g['team_id']) in tuple(str(t) for t in TARGET_TEAMS)]
    src_ag_ids = {int(g['id']) for g in src_ags}
    src_assets = [a for a in src_assets_all if int(a['group_id']) in src_ag_ids]
    src_gens = [g for g in src_gens_all if int(g['user_id']) in src_uid_set]
    src_lrs = [r for r in src_lrs_all if int(r['user_id']) in src_uid_set]
    src_las = [a for a in src_las_all if int(a['user_id']) in src_uid_set]
    src_ases = [s for s in src_ases_all if int(s['user_id']) in src_uid_set]
    src_als = [a for a in src_als_all if a['operator_id'] is not None and int(a['operator_id']) in src_uid_set]
    # ===== connect to the target DB =====
    print('Connecting to target DB...')
    conn = pymysql.connect(**DB)
    cur = conn.cursor()
    try:
        cur.execute('SET FOREIGN_KEY_CHECKS = 0')

        # ---------- 1. user: dedup by username ----------
        print('\n[1/8] accounts_user')
        ph = ','.join(['%s']*len(src_uname_set))
        cur.execute(f"SELECT id, username FROM accounts_user WHERE username IN ({ph})", list(src_uname_set))
        existing_by_uname = {r['username']: r['id'] for r in cur.fetchall()}
        uid_map = {}  # old_uid → new_uid
        user_inserts = 0
        for u in src_team_users:
            old_uid = int(u['id'])
            if u['username'] in existing_by_uname:
                uid_map[old_uid] = existing_by_uname[u['username']]
                continue
            # insert new user (AUTO_INCREMENT id)
            insert_cols = [c for c in USER_COLS if c != 'id']
            insert_vals = [u[c] for c in insert_cols]
            cur.execute(
                f"INSERT INTO `accounts_user` ({','.join('`'+c+'`' for c in insert_cols)}) "
                f"VALUES ({','.join(['%s']*len(insert_cols))})",
                insert_vals
            )
            new_uid = cur.lastrowid
            uid_map[old_uid] = new_uid
            user_inserts += 1
            print(f'  new user: {u["username"]} (old {old_uid} → new {new_uid})')
        print(f'  inserted {user_inserts} users, mapped {len(uid_map)}')
        # ---------- 2. assetgroup: dedup by remote_group_id ----------
        print('\n[2/8] generation_assetgroup')
        src_rgids = [g['remote_group_id'] for g in src_ags if g['remote_group_id']]
        if src_rgids:
            ph = ','.join(['%s']*len(src_rgids))
            cur.execute(f"SELECT id, remote_group_id FROM generation_assetgroup WHERE remote_group_id IN ({ph})", src_rgids)
            existing_by_rgid = {r['remote_group_id']: r['id'] for r in cur.fetchall()}
        else:
            existing_by_rgid = {}
        ag_map = {}  # old_ag_id → new_ag_id
        ag_inserts = 0
        for g in src_ags:
            old_id = int(g['id'])
            rgid = g['remote_group_id']
            if rgid and rgid in existing_by_rgid:
                ag_map[old_id] = existing_by_rgid[rgid]
                continue
            insert_cols = [c for c in AG_COLS if c != 'id']
            vals = []
            for c in insert_cols:
                v = g[c]
                if c == 'created_by_id' and v is not None:
                    ov = int(v)
                    v = uid_map.get(ov, ov)  # created_by may be a user outside the target teams; keep as-is
                vals.append(v)
            cur.execute(
                f"INSERT INTO `generation_assetgroup` ({','.join('`'+c+'`' for c in insert_cols)}) "
                f"VALUES ({','.join(['%s']*len(insert_cols))})",
                vals
            )
            ag_map[old_id] = cur.lastrowid
            ag_inserts += 1
        print(f'  inserted {ag_inserts} assetgroups, mapped {len(ag_map)}')
        # ---------- 3. asset: dedup by remote_asset_id; if empty, by (group_id, name) ----------
        print('\n[3/8] generation_asset')
        # existing remote_asset_id set
        cur.execute("SELECT remote_asset_id FROM generation_asset WHERE remote_asset_id != ''")
        existing_raids = {r['remote_asset_id'] for r in cur.fetchall()}
        # existing (group_id, name) pairs (used when remote_asset_id is empty)
        cur.execute("SELECT group_id, name FROM generation_asset WHERE remote_asset_id = ''")
        existing_namekeys = {(r['group_id'], r['name']) for r in cur.fetchall()}
        asset_inserts = 0
        VIDEO_EXT = ('.mp4', '.mov', '.avi', '.webm', '.mkv', '.m4v')
        AUDIO_EXT = ('.mp3', '.wav', '.m4a', '.aac', '.flac', '.ogg')
        for a in src_assets:
            new_gid = ag_map[int(a['group_id'])]
            raid = a['remote_asset_id']
            key = (new_gid, a['name'])
            if raid and raid in existing_raids:
                continue
            if not raid and key in existing_namekeys:
                continue
            # infer asset_type (NOT NULL in the test DB)
            url_l = (a['url'] or '').lower()
            if any(e in url_l for e in VIDEO_EXT):
                atype = 'Video'
            elif any(e in url_l for e in AUDIO_EXT):
                atype = 'Audio'
            else:
                atype = 'Image'
            cur.execute(
                """INSERT INTO generation_asset (remote_asset_id,name,url,status,error_message,created_at,
                   group_id,asset_type,duration,thumbnail_url) VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)""",
                (raid or '', a['name'], a['url'], a['status'], a['error_message'], a['created_at'],
                 new_gid, atype, None, '')
            )
            asset_inserts += 1
        print(f'  inserted {asset_inserts} assets')
        # ---------- 4. generationrecord: dedup by task_id ----------
        print('\n[4/8] generation_generationrecord')
        src_tids = [g['task_id'] for g in src_gens]
        if src_tids:
            ph = ','.join(['%s']*len(src_tids))
            cur.execute(f"SELECT task_id FROM generation_generationrecord WHERE task_id IN ({ph})", src_tids)
            existing_tids = {r['task_id'] for r in cur.fetchall()}
        else:
            existing_tids = set()
        gen_inserts_by_team = {t: [] for t in TARGET_TEAMS}  # for the final team-field recalculation
        gen_insert_cols = [c for c in GEN_COLS if c != 'id'] + ['thumbnail_url']
        for g in src_gens:
            if g['task_id'] in existing_tids: continue
            new_uid = uid_map[int(g['user_id'])]
            vals = [g[c] if c != 'user_id' else new_uid for c in GEN_COLS if c != 'id'] + ['']
            cur.execute(
                f"INSERT INTO generation_generationrecord "
                f"({','.join('`'+c+'`' for c in gen_insert_cols)}) "
                f"VALUES ({','.join(['%s']*len(gen_insert_cols))})",
                vals
            )
            # bucket by owning team (from the source user's team_id)
            src_uid = int(g['user_id'])
            src_user = next(u for u in src_team_users if int(u['id']) == src_uid)
            tid = int(src_user['team_id'])
            gen_inserts_by_team[tid].append(g)
        parts = ', '.join(f'team{t}={len(gen_inserts_by_team[t])}' for t in TARGET_TEAMS)
        total = sum(len(v) for v in gen_inserts_by_team.values())
        print(f'  inserted {total} generationrecords ({parts})')
# ---------- 5. loginrecord按 (user_id, created_at, ip) 去重 ----------
print('\n[5/8] accounts_loginrecord')
# 一次查出相关 user_id 的所有 loginrecord
mapped_uids = set(uid_map.values())
if mapped_uids:
ph = ','.join(['%s']*len(mapped_uids))
cur.execute(f"""SELECT id, user_id, created_at, ip_address FROM accounts_loginrecord
WHERE user_id IN ({ph})""", list(mapped_uids))
existing_lr = {(r['user_id'], r['created_at'], r['ip_address']): r['id'] for r in cur.fetchall()}
else:
existing_lr = {}
lr_map = {} # old_lr_id → new_lr_id (本次插入的)
lr_inserts = 0
lr_insert_cols = [c for c in LR_COLS if c != 'id']
for r in src_lrs:
new_uid = uid_map[int(r['user_id'])]
# 解析 created_at → datetime
from datetime import datetime
ca = r['created_at']
if isinstance(ca, str):
try: ca_dt = datetime.strptime(ca, '%Y-%m-%d %H:%M:%S.%f')
except ValueError: ca_dt = datetime.strptime(ca, '%Y-%m-%d %H:%M:%S')
else:
ca_dt = ca
key = (new_uid, ca_dt, r['ip_address'])
if key in existing_lr:
lr_map[int(r['id'])] = existing_lr[key]
continue
vals = [r[c] if c != 'user_id' else new_uid for c in lr_insert_cols]
cur.execute(
f"INSERT INTO accounts_loginrecord ({','.join('`'+c+'`' for c in lr_insert_cols)}) "
f"VALUES ({','.join(['%s']*len(lr_insert_cols))})",
vals
)
lr_map[int(r['id'])] = cur.lastrowid
lr_inserts += 1
print(f' inserted {lr_inserts} loginrecord')
# ---------- 6. loginanomaly: dedupe by (user_id, login_record_id, rule, created_at) ----------
print('\n[6/8] accounts_loginanomaly')
la_inserts = 0
la_insert_cols = [c for c in LA_COLS if c != 'id']
for a in src_las:
new_uid = uid_map[int(a['user_id'])]
old_lr_id = int(a['login_record_id'])
if old_lr_id not in lr_map:
# The login_record may fall outside the source extract (cross-team); skip
continue
new_lr_id = lr_map[old_lr_id]
cur.execute("""SELECT 1 FROM accounts_loginanomaly
WHERE user_id=%s AND login_record_id=%s AND rule=%s AND created_at=%s""",
(new_uid, new_lr_id, a['rule'], a['created_at']))
if cur.fetchone(): continue
vals = []
for c in la_insert_cols:
if c == 'user_id': vals.append(new_uid)
elif c == 'login_record_id': vals.append(new_lr_id)
else: vals.append(a[c])
cur.execute(
f"INSERT INTO accounts_loginanomaly ({','.join('`'+c+'`' for c in la_insert_cols)}) "
f"VALUES ({','.join(['%s']*len(la_insert_cols))})",
vals
)
la_inserts += 1
print(f' inserted {la_inserts} loginanomaly')
# ---------- 7. activesession: dedupe by session_id ----------
print('\n[7/8] accounts_activesession')
src_sids = [s['session_id'] for s in src_ases]
if src_sids:
ph = ','.join(['%s']*len(src_sids))
cur.execute(f"SELECT session_id FROM accounts_activesession WHERE session_id IN ({ph})", src_sids)
existing_sids = {r['session_id'] for r in cur.fetchall()}
else:
existing_sids = set()
as_inserts = 0
as_insert_cols = [c for c in AS_COLS if c != 'id']
for s in src_ases:
if s['session_id'] in existing_sids: continue
new_uid = uid_map[int(s['user_id'])]
vals = [s[c] if c != 'user_id' else new_uid for c in as_insert_cols]
cur.execute(
f"INSERT INTO accounts_activesession ({','.join('`'+c+'`' for c in as_insert_cols)}) "
f"VALUES ({','.join(['%s']*len(as_insert_cols))})",
vals
)
as_inserts += 1
print(f' inserted {as_inserts} activesession')
# ---------- 8. adminauditlog: dedupe by (operator_id, action, target_id, created_at) ----------
print('\n[8/8] accounts_adminauditlog')
al_inserts = 0
al_insert_cols = [c for c in AL_COLS if c != 'id']
for a in src_als:
op_id = int(a['operator_id'])
new_op_id = uid_map.get(op_id, op_id)
tgt = int(a['target_id']) if a['target_id'] else None
new_tgt = uid_map.get(tgt, tgt) if tgt else None
cur.execute("""SELECT 1 FROM accounts_adminauditlog
WHERE operator_id=%s AND action=%s AND
(target_id=%s OR (target_id IS NULL AND %s IS NULL))
AND created_at=%s""",
(new_op_id, a['action'], new_tgt, new_tgt, a['created_at']))
if cur.fetchone(): continue
vals = []
for c in al_insert_cols:
if c == 'operator_id': vals.append(new_op_id)
elif c == 'target_id': vals.append(new_tgt)
else: vals.append(a[c])
cur.execute(
f"INSERT INTO accounts_adminauditlog ({','.join('`'+c+'`' for c in al_insert_cols)}) "
f"VALUES ({','.join(['%s']*len(al_insert_cols))})",
vals
)
al_inserts += 1
print(f' inserted {al_inserts} adminauditlog')
# ---------- Recalculate team stats ----------
print('\n[Recalculating team stats]')
for tid in TARGET_TEAMS:
gens_added = gen_inserts_by_team[tid]
if not gens_added:
print(f' Team {tid}: no new generation records, skipping')
continue
sec_delta = sum(Decimal(g['seconds_consumed']) for g in gens_added)
cost_delta = sum(Decimal(g['cost_amount']) for g in gens_added)
cur.execute("""UPDATE accounts_team SET
total_seconds_used = total_seconds_used + %s,
total_spent = total_spent + %s,
balance = balance - %s
WHERE id=%s""",
(sec_delta, cost_delta, cost_delta, tid))
print(f' Team {tid}: +seconds={sec_delta} +spent={cost_delta} -balance={cost_delta}')
cur.execute('SET FOREIGN_KEY_CHECKS = 1')
if commit:
conn.commit()
print('\n✅ COMMITTED')
else:
conn.rollback()
print('\n🔎 Rolled back (use --commit to persist)')
except Exception as e:
conn.rollback()
print(f'\n❌ Error: {e}')
raise
finally:
conn.close()
if __name__ == '__main__':
main()
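The script above repeats one idempotent-insert pattern per table: build an index of existing rows keyed by a natural dedupe key, insert only unseen rows, and record an old→new id map so child tables can be remapped. A minimal sketch of that pattern follows; the key shape and the `insert_fn` callback are illustrative, not the real schema or DB API:

```python
def migrate_rows(src_rows, existing_index, insert_fn):
    """src_rows: list of dicts with 'id' plus the key fields.
    existing_index: {(user_id, created_at, ip): existing_id} from the target DB.
    insert_fn(row) -> new AUTO_INCREMENT id.
    Returns ({old_id: new_id}, inserted_count); matched rows also land in the
    map so dependent tables can always be remapped."""
    id_map = {}
    inserted = 0
    for row in src_rows:
        key = (row['user_id'], row['created_at'], row['ip'])
        if key in existing_index:
            id_map[row['id']] = existing_index[key]  # reuse, no duplicate insert
            continue
        new_id = insert_fn(row)
        existing_index[key] = new_id  # also dedupes within the batch itself
        id_map[row['id']] = new_id
        inserted += 1
    return id_map, inserted
```

Rerunning with the same source data then inserts nothing, which is what makes the dry-run/`--commit` workflow above safe to repeat.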

View File

@ -0,0 +1,263 @@
#!/usr/bin/env python3
"""
Sync team 12 (万物苏网络) from production test env.
- Team 4 (洁雯团队) 两边都已存在且用户列表一致不动
- Team 12 (万物苏网络) 测试库不存在从正式服完整拷贝
accounts_team 1 (id=12 保留)
accounts_user 11 (team 12 成员保留源 id)
generation_assetgroup 62 (AUTO id映射 oldnew)
generation_asset N (AUTO idremap group_id)
accounts_loginrecord N (AUTO id)
accounts_loginanomaly N (AUTO id, remap login_record_id)
accounts_activesession N (AUTO id)
accounts_adminauditlog N (operator_id IN team12 users, AUTO id)
generation_generationrecord 440 (AUTO id)
用法
python3 migrate_from_prod.py # dry-run事务回滚
python3 migrate_from_prod.py --commit # 实际写入测试环境
"""
import sys
import pymysql
PROD = dict(
host='mysql-d9bb4e81696d-public.rds.volces.com',
port=3306, user='zyc', password='Zyc188208',
database='video_auto', charset='utf8mb4',
autocommit=False, cursorclass=pymysql.cursors.DictCursor,
)
TEST = dict(
host='mysql-8351f937d637-public.rds.volces.com',
port=3306, user='zyc', password='Zyc188208',
database='video_auto', charset='utf8mb4',
autocommit=False,
)
TEAM_ID = 12
def fetch_all(cur, sql, *params):
cur.execute(sql, params)
return cur.fetchall()
def main():
commit = '--commit' in sys.argv
print('Connecting to PROD (read-only fetch)...')
prod = pymysql.connect(**PROD)
pcur = prod.cursor()
# 1) team
team = fetch_all(pcur, 'SELECT * FROM accounts_team WHERE id=%s', TEAM_ID)
assert len(team) == 1, f'Expected 1 team, got {len(team)}'
team_row = team[0]
# 2) users
users = fetch_all(pcur, 'SELECT * FROM accounts_user WHERE team_id=%s ORDER BY id', TEAM_ID)
user_ids = [u['id'] for u in users]
print(f'team={team_row["name"]} users={len(users)} ids={user_ids}')
# 3) assetgroups
agroups = fetch_all(pcur, 'SELECT * FROM generation_assetgroup WHERE team_id=%s ORDER BY id', TEAM_ID)
group_ids = [g['id'] for g in agroups]
# 4) assets — group_id in agroup set
if group_ids:
ph = ','.join(['%s'] * len(group_ids))
assets = fetch_all(pcur, f'SELECT * FROM generation_asset WHERE group_id IN ({ph}) ORDER BY id', *group_ids)
else:
assets = []
# 5) login records (team_id = TEAM_ID OR user_id IN users)
if user_ids:
ph = ','.join(['%s'] * len(user_ids))
lrs = fetch_all(pcur, f'SELECT * FROM accounts_loginrecord WHERE user_id IN ({ph}) ORDER BY id', *user_ids)
else:
lrs = []
# 6) login anomalies (team_id = TEAM_ID)
las = fetch_all(pcur, 'SELECT * FROM accounts_loginanomaly WHERE team_id=%s ORDER BY id', TEAM_ID)
# 7) active sessions
if user_ids:
ph = ','.join(['%s'] * len(user_ids))
ases = fetch_all(pcur, f'SELECT * FROM accounts_activesession WHERE user_id IN ({ph}) ORDER BY id', *user_ids)
else:
ases = []
# 8) admin audit logs (operator_id in team12 users)
if user_ids:
ph = ','.join(['%s'] * len(user_ids))
als = fetch_all(pcur, f'SELECT * FROM accounts_adminauditlog WHERE operator_id IN ({ph}) ORDER BY id', *user_ids)
else:
als = []
# 9) generation records
if user_ids:
ph = ','.join(['%s'] * len(user_ids))
gens = fetch_all(pcur, f'SELECT * FROM generation_generationrecord WHERE user_id IN ({ph}) ORDER BY id', *user_ids)
else:
gens = []
# 10) team anomaly config
tacs = fetch_all(pcur, 'SELECT * FROM accounts_teamanomalyconfig WHERE team_id=%s', TEAM_ID)
prod.close()
print(f'Fetched: team=1 users={len(users)} assetgroups={len(agroups)} assets={len(assets)} '
f'loginrecords={len(lrs)} loginanomalies={len(las)} activesessions={len(ases)} '
f'adminauditlogs={len(als)} generationrecords={len(gens)} teamanomalyconfig={len(tacs)}')
# --- target test DB schema may have extra fields or be identical; we fetch column list to be safe ---
print('\nConnecting to TEST DB for write...')
test = pymysql.connect(**TEST)
tcur = test.cursor()
def get_test_cols(tbl):
tcur.execute(f"SHOW COLUMNS FROM `{tbl}`")
return [row[0] for row in tcur.fetchall()]
def align_row(src_row, test_cols, overrides=None, drop_id=True):
"""Produce (cols, values) aligned to test schema.
- Drop id if drop_id=True (AUTO_INCREMENT)
- Apply overrides {col: value}
- Fill columns missing from the source row with an empty string
"""
overrides = overrides or {}
cols, vals = [], []
for c in test_cols:
if drop_id and c == 'id':
continue
if c in overrides:
vals.append(overrides[c])
elif c in src_row:
vals.append(src_row[c])
else:
# new NOT-NULL column in test schema not present in prod — fill empty str
vals.append('')
cols.append(c)
return cols, vals
def ins(tbl, cols, vals):
ph = ','.join(['%s'] * len(cols))
sql = f"INSERT INTO `{tbl}` ({','.join('`'+c+'`' for c in cols)}) VALUES ({ph})"
tcur.execute(sql, vals)
return tcur.lastrowid
try:
tcur.execute('SET FOREIGN_KEY_CHECKS = 0')
# 1) accounts_team — preserve id
print('\n[1/10] accounts_team')
team_cols_test = get_test_cols('accounts_team')
c, v = align_row(team_row, team_cols_test, drop_id=False)
ins('accounts_team', c, v)
print(f' inserted team id={TEAM_ID}')
# 2) accounts_user — preserve id
print('\n[2/10] accounts_user')
user_cols_test = get_test_cols('accounts_user')
for u in users:
c, v = align_row(u, user_cols_test, drop_id=False)
ins('accounts_user', c, v)
print(f' inserted {len(users)} users')
# 3) accounts_teamanomalyconfig
print('\n[3/10] accounts_teamanomalyconfig')
if tacs:
tac_cols_test = get_test_cols('accounts_teamanomalyconfig')
for t in tacs:
c, v = align_row(t, tac_cols_test, drop_id=True)
ins('accounts_teamanomalyconfig', c, v)
print(f' inserted {len(tacs)} rows')
else:
print(' 0 rows')
# 4) generation_assetgroup — AUTO id, keep map
print('\n[4/10] generation_assetgroup')
ag_cols_test = get_test_cols('generation_assetgroup')
ag_map = {}
for g in agroups:
c, v = align_row(g, ag_cols_test, drop_id=True)
new_id = ins('generation_assetgroup', c, v)
ag_map[g['id']] = new_id
print(f' inserted {len(ag_map)} rows')
# 5) generation_asset — AUTO id, remap group_id
print('\n[5/10] generation_asset')
a_cols_test = get_test_cols('generation_asset')
for a in assets:
ov = {'group_id': ag_map[a['group_id']]}
c, v = align_row(a, a_cols_test, overrides=ov, drop_id=True)
ins('generation_asset', c, v)
print(f' inserted {len(assets)} rows')
# 6) accounts_loginrecord — AUTO id, keep map
print('\n[6/10] accounts_loginrecord')
lr_cols_test = get_test_cols('accounts_loginrecord')
lr_map = {}
for lr in lrs:
c, v = align_row(lr, lr_cols_test, drop_id=True)
new_id = ins('accounts_loginrecord', c, v)
lr_map[lr['id']] = new_id
print(f' inserted {len(lr_map)} rows')
# 7) accounts_loginanomaly — AUTO id, remap login_record_id
print('\n[7/10] accounts_loginanomaly')
la_cols_test = get_test_cols('accounts_loginanomaly')
skipped_la = 0
for la in las:
if la['login_record_id'] not in lr_map:
# login_record not fetched (shouldn't happen if schema consistent) → skip
skipped_la += 1
continue
ov = {'login_record_id': lr_map[la['login_record_id']]}
c, v = align_row(la, la_cols_test, overrides=ov, drop_id=True)
ins('accounts_loginanomaly', c, v)
print(f' inserted {len(las)-skipped_la} rows (skipped {skipped_la})')
# 8) accounts_activesession
print('\n[8/10] accounts_activesession')
as_cols_test = get_test_cols('accounts_activesession')
for a in ases:
c, v = align_row(a, as_cols_test, drop_id=True)
ins('accounts_activesession', c, v)
print(f' inserted {len(ases)} rows')
# 9) accounts_adminauditlog
print('\n[9/10] accounts_adminauditlog')
al_cols_test = get_test_cols('accounts_adminauditlog')
for al in als:
c, v = align_row(al, al_cols_test, drop_id=True)
ins('accounts_adminauditlog', c, v)
print(f' inserted {len(als)} rows')
# 10) generation_generationrecord
print('\n[10/10] generation_generationrecord')
g_cols_test = get_test_cols('generation_generationrecord')
for g in gens:
c, v = align_row(g, g_cols_test, drop_id=True)
ins('generation_generationrecord', c, v)
print(f' inserted {len(gens)} rows')
tcur.execute('SET FOREIGN_KEY_CHECKS = 1')
if commit:
test.commit()
print('\n✅ COMMITTED to test DB')
else:
test.rollback()
print('\n🔎 Rolled back (use --commit to persist)')
except Exception as e:
test.rollback()
print(f'\n❌ Error: {e}')
raise
finally:
test.close()
if __name__ == '__main__':
main()
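The core of the script above is `align_row`, which projects a production row onto the test schema's column order, drops `id` for AUTO_INCREMENT tables, applies overrides (e.g. a remapped `group_id`), and pads columns the source lacks. A standalone copy of the helper, so its alignment behavior can be checked in isolation:

```python
def align_row(src_row, test_cols, overrides=None, drop_id=True):
    """Produce (cols, values) aligned to the target schema's column order.
    - Drop 'id' when drop_id=True (AUTO_INCREMENT assigns it)
    - Apply overrides {col: value}, e.g. remapped foreign keys
    - Fill columns absent from the source row with an empty string"""
    overrides = overrides or {}
    cols, vals = [], []
    for c in test_cols:
        if drop_id and c == 'id':
            continue
        if c in overrides:
            vals.append(overrides[c])
        elif c in src_row:
            vals.append(src_row[c])
        else:
            vals.append('')  # new NOT NULL column not present in prod
        cols.append(c)
    return cols, vals
```

Driving insertion off the target's `SHOW COLUMNS` output rather than the source's is what lets the test schema carry extra columns (like `thumbnail_url`) without the copy breaking.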

View File

@ -0,0 +1,295 @@
#!/usr/bin/env python3
"""
Migrate two teams (万物苏网络 id=12, 洁雯团队 id=4 增量 user 107) from
source SQL dump 测试库 MySQL (mysql-8351f937d637-public.rds.volces.com).
策略
- team 4 已存在仅新增 user 107 及其关联数据
- team 12 + 9 个用户完全不存在 全量插入
- team/user ID 保留源 ID无冲突
- loginrecord / loginanomaly / assetgroup / asset / generationrecord / activesession / adminauditlog
AUTO_INCREMENT 重新分配 ID维护 oldnew 映射
"""
import re
import sys
import pymysql
SOURCE_SQL = '/Users/maidong/Desktop/zyc/研究openclaw/视频生成平台/jimeng-clone/video_auto迁移2个团队的数据.sql'
DB = dict(
host='mysql-8351f937d637-public.rds.volces.com',
port=3306, user='zyc', password='Zyc188208',
database='video_auto', charset='utf8mb4',
autocommit=False,
)
TEAM4_EXISTING_USERS = {19, 20, 21, 22, 23, 24, 25} # already present on the target; untouched
INCREMENTAL_USERS = {107, 99, 102, 103, 104, 105, 108, 109, 110, 111}
NEW_TEAMS = {12}
def split_values(s):
vals, cur, in_str, i = [], '', False, 0
while i < len(s):
c = s[i]
if c == "'" and (i == 0 or s[i-1] != '\\'):
in_str = not in_str
cur += c
elif c == ',' and not in_str:
vals.append(cur.strip()); cur = ''
else:
cur += c
i += 1
vals.append(cur.strip())
return vals
def parse_table(path, tbl):
rows = []
with open(path, 'r', encoding='utf-8') as f:
for line in f:
if not line.startswith(f'INSERT INTO `{tbl}`'):
continue
m = re.search(r'VALUES \((.*)\);', line)
if not m: continue
rows.append(split_values(m.group(1)))
return rows
def q(v):
"""Pass-through kept from an earlier raw-literal approach; the script now uses
parameterized inserts, so this helper appears unused."""
return v
def bulk_insert(cur, tbl, cols, rows_values, label):
"""Insert preserving source id (rows_values includes id as first column).
Returns rowcount."""
if not rows_values:
print(f' [{label}] 0 rows')
return 0
placeholders = ','.join(['%s'] * len(cols))
sql = f"INSERT INTO `{tbl}` ({','.join('`'+c+'`' for c in cols)}) VALUES ({placeholders})"
cur.executemany(sql, rows_values)
print(f' [{label}] inserted {cur.rowcount} rows')
return cur.rowcount
def auto_insert_collect_id(cur, tbl, cols_no_id, rows_vals_no_id, src_ids, label):
"""INSERT rows letting AUTO_INCREMENT assign id.
Uses row-by-row inserts to map old→new ids deterministically.
cols_no_id: column list without `id`.
rows_vals_no_id: list of tuples matching cols_no_id.
src_ids: list of source ids (same order as rows_vals_no_id).
Returns dict {old_id: new_id}.
"""
mapping = {}
if not rows_vals_no_id:
print(f' [{label}] 0 rows')
return mapping
placeholders = ','.join(['%s'] * len(cols_no_id))
sql = f"INSERT INTO `{tbl}` ({','.join('`'+c+'`' for c in cols_no_id)}) VALUES ({placeholders})"
for old_id, vals in zip(src_ids, rows_vals_no_id):
cur.execute(sql, vals)
mapping[int(old_id)] = cur.lastrowid
print(f' [{label}] inserted {len(mapping)} rows, id range {min(mapping.values())}-{max(mapping.values())}')
return mapping
def unquote(s):
"""Turn a raw SQL literal like 'foo\\'bar' or NULL into Python value."""
s = s.strip()
if s == 'NULL':
return None
if s.startswith("'") and s.endswith("'"):
inner = s[1:-1]
# MySQL dump uses \' for escaped single quote and \\ for backslash
inner = inner.replace("\\'", "'").replace('\\"', '"').replace('\\\\', '\\').replace('\\n', '\n').replace('\\r', '\r').replace('\\t', '\t').replace('\\0', '\0')
return inner
# numeric / boolean
return s
def raw_vals_to_py(vals):
return [unquote(v) for v in vals]
def main():
print(f'Loading source SQL: {SOURCE_SQL}')
# --- parse all needed tables ---
teams_all = parse_table(SOURCE_SQL, 'accounts_team')
users_all = parse_table(SOURCE_SQL, 'accounts_user')
agroups_all = parse_table(SOURCE_SQL, 'generation_assetgroup')
assets_all = parse_table(SOURCE_SQL, 'generation_asset')
lrs_all = parse_table(SOURCE_SQL, 'accounts_loginrecord')
las_all = parse_table(SOURCE_SQL, 'accounts_loginanomaly')
ases_all = parse_table(SOURCE_SQL, 'accounts_activesession')
als_all = parse_table(SOURCE_SQL, 'accounts_adminauditlog')
gens_all = parse_table(SOURCE_SQL, 'generation_generationrecord')
# --- filter ---
teams = [r for r in teams_all if int(r[0]) in NEW_TEAMS]
users = [r for r in users_all if int(r[0]) in INCREMENTAL_USERS]
# assetgroup team_id is at index 7
relevant_groups = [r for r in agroups_all if r[7] in ('4', '12')]
group_ids = {r[0] for r in relevant_groups}
# asset group_id at index 7
assets = [r for r in assets_all if r[7] in group_ids]
# loginrecord user_id at index 4
lrs = [r for r in lrs_all if int(r[4]) in INCREMENTAL_USERS]
lr_ids = {r[0] for r in lrs}
# loginanomaly user_id at index 10; login_record_id at index 8
las = [r for r in las_all if int(r[10]) in INCREMENTAL_USERS and r[8] in lr_ids]
# activesession user_id at index 5
ases = [r for r in ases_all if int(r[5]) in INCREMENTAL_USERS]
# adminauditlog operator_id at last (index 10)
als = [r for r in als_all if r[-1] != 'NULL' and r[-1].isdigit() and int(r[-1]) in INCREMENTAL_USERS]
# generationrecord user_id at index 9
gens = [r for r in gens_all if int(r[9]) in INCREMENTAL_USERS]
print(f'Prepared:')
print(f' teams (new) : {len(teams)}')
print(f' users (incremental): {len(users)}')
print(f' assetgroups (T4+T12): {len(relevant_groups)}')
print(f' assets : {len(assets)}')
print(f' loginrecords : {len(lrs)}')
print(f' loginanomalies : {len(las)}')
print(f' activesessions : {len(ases)}')
print(f' adminauditlogs : {len(als)}')
print(f' generationrecords : {len(gens)}')
if '--dry-run' in sys.argv:
print('\n--dry-run: exiting before DB connect')
return
# --- connect ---
print('\nConnecting to target DB...')
conn = pymysql.connect(**DB)
try:
cur = conn.cursor()
cur.execute('SET FOREIGN_KEY_CHECKS = 0')
# 1) accounts_team (id preserved)
print('\n[1/9] accounts_team')
team_cols = ['id','name','total_seconds_pool','total_seconds_used','monthly_seconds_limit',
'daily_member_limit_default','is_active','created_at','updated_at','disabled_by',
'expected_regions','balance','daily_member_spending_default','frozen_amount',
'markup_percentage','monthly_spending_limit','total_spent','max_concurrent_tasks']
bulk_insert(cur, 'accounts_team', team_cols,
[raw_vals_to_py(r) for r in teams], 'team')
# 2) accounts_user (id preserved)
print('\n[2/9] accounts_user')
user_cols = ['id','password','last_login','is_superuser','username','first_name','last_name',
'is_staff','is_active','date_joined','email','created_at','updated_at',
'daily_seconds_limit','monthly_seconds_limit','is_team_admin','team_id',
'must_change_password','disabled_by','daily_generation_limit',
'monthly_generation_limit','spending_limit','last_read_announcement','is_team_owner']
bulk_insert(cur, 'accounts_user', user_cols,
[raw_vals_to_py(r) for r in users], 'user')
# 3) generation_assetgroup (AUTO id)
print('\n[3/9] generation_assetgroup')
ag_cols = ['remote_group_id','name','description','thumbnail_url','created_at','created_by_id','team_id']
# source schema: id=0, remote_group_id=1, name=2, description=3, thumbnail_url=4, created_at=5, created_by_id=6, team_id=7
ag_src_ids = [r[0] for r in relevant_groups]
ag_vals = [raw_vals_to_py(r[1:]) for r in relevant_groups] # strip id
ag_map = auto_insert_collect_id(cur, 'generation_assetgroup', ag_cols, ag_vals, ag_src_ids, 'assetgroup')
# 4) generation_asset (AUTO id, remap group_id)
# The test schema has extra columns: asset_type (NOT NULL), duration (NULL), thumbnail_url (NOT NULL).
# Infer asset_type from the URL extension; leave thumbnail_url as '' and duration as NULL.
print('\n[4/9] generation_asset')
a_cols = ['remote_asset_id','name','url','status','error_message','created_at','group_id',
'asset_type','duration','thumbnail_url']
a_src_ids = [r[0] for r in assets]
a_vals = []
VIDEO_EXT = ('.mp4','.mov','.avi','.webm','.mkv','.m4v')
for r in assets:
v = raw_vals_to_py(r[1:]) # index 0..6 = remote_asset_id..group_id
# remap group_id (now at index 6)
v[6] = ag_map[int(r[7])]
url_lower = (v[2] or '').lower()
asset_type = 'video' if any(e in url_lower for e in VIDEO_EXT) else 'image'
v.extend([asset_type, None, '']) # asset_type, duration, thumbnail_url
a_vals.append(v)
auto_insert_collect_id(cur, 'generation_asset', a_cols, a_vals, a_src_ids, 'asset')
# 5) accounts_loginrecord (AUTO id)
print('\n[5/9] accounts_loginrecord')
# source schema: id=0, ip_address=1, user_agent=2, created_at=3, user_id=4, geo_city=5, geo_country=6, geo_province=7, geo_source=8, team_id=9
lr_cols = ['ip_address','user_agent','created_at','user_id','geo_city','geo_country','geo_province','geo_source','team_id']
lr_src_ids = [r[0] for r in lrs]
lr_vals = [raw_vals_to_py(r[1:]) for r in lrs]
lr_map = auto_insert_collect_id(cur, 'accounts_loginrecord', lr_cols, lr_vals, lr_src_ids, 'loginrecord')
# 6) accounts_loginanomaly (AUTO id, remap login_record_id)
print('\n[6/9] accounts_loginanomaly')
# source schema: id=0, level=1, rule=2, detail=3, alerted=4, auto_disabled=5, disabled_target=6, created_at=7, login_record_id=8, team_id=9, user_id=10
la_cols = ['level','rule','detail','alerted','auto_disabled','disabled_target','created_at','login_record_id','team_id','user_id']
la_src_ids = [r[0] for r in las]
la_vals = []
for r in las:
v = raw_vals_to_py(r[1:]) # index 0..9 in slice = level..user_id
# login_record_id is at new-index 7
v[7] = lr_map[int(r[8])]
la_vals.append(v)
auto_insert_collect_id(cur, 'accounts_loginanomaly', la_cols, la_vals, la_src_ids, 'loginanomaly')
# 7) accounts_activesession (AUTO id)
print('\n[7/9] accounts_activesession')
# source schema: id=0, session_id=1, device_type=2, user_agent=3, created_at=4, user_id=5
as_cols = ['session_id','device_type','user_agent','created_at','user_id']
as_src_ids = [r[0] for r in ases]
as_vals = [raw_vals_to_py(r[1:]) for r in ases]
auto_insert_collect_id(cur, 'accounts_activesession', as_cols, as_vals, as_src_ids, 'activesession')
# 8) accounts_adminauditlog (AUTO id)
print('\n[8/9] accounts_adminauditlog')
# source schema: id=0, operator_name=1, action=2, target_type=3, target_id=4, target_name=5,
# before=6, after=7, ip_address=8, created_at=9, operator_id=10
al_cols = ['operator_name','action','target_type','target_id','target_name','before','after','ip_address','created_at','operator_id']
al_src_ids = [r[0] for r in als]
al_vals = [raw_vals_to_py(r[1:]) for r in als]
auto_insert_collect_id(cur, 'accounts_adminauditlog', al_cols, al_vals, al_src_ids, 'adminauditlog')
# 9) generation_generationrecord (AUTO id)
print('\n[9/9] generation_generationrecord')
# source schema: id, task_id, prompt, mode, model, aspect_ratio, duration, status, created_at,
# user_id, seconds_consumed, ark_task_id, error_message, reference_urls, result_url,
# base_cost_amount, cost_amount, frozen_amount, resolution, tokens_consumed,
# is_favorited, seed, completed_at, raw_error, updated_at, is_deleted
# The test schema has an extra thumbnail_url column (NOT NULL): left as an empty string
g_cols = ['task_id','prompt','mode','model','aspect_ratio','duration','status','created_at',
'user_id','seconds_consumed','ark_task_id','error_message','reference_urls','result_url',
'base_cost_amount','cost_amount','frozen_amount','resolution','tokens_consumed',
'is_favorited','seed','completed_at','raw_error','updated_at','is_deleted',
'thumbnail_url']
g_src_ids = [r[0] for r in gens]
g_vals = []
for r in gens:
v = raw_vals_to_py(r[1:])
v.append('') # thumbnail_url
g_vals.append(v)
auto_insert_collect_id(cur, 'generation_generationrecord', g_cols, g_vals, g_src_ids, 'generationrecord')
cur.execute('SET FOREIGN_KEY_CHECKS = 1')
if '--commit' in sys.argv:
conn.commit()
print('\n✅ COMMITTED')
else:
conn.rollback()
print('\n🔎 Rolled back (rerun with --commit to persist)')
except Exception as e:
conn.rollback()
print(f'\n❌ Error: {e}')
raise
finally:
conn.close()
if __name__ == '__main__':
main()
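The dump parser above splits each `VALUES (...)` payload on commas that fall outside single-quoted strings, then converts the raw SQL literals to Python values. A condensed version of that pair of helpers, handling only the escapes exercised here:

```python
def split_values(s):
    """Split a MySQL VALUES payload on commas outside quoted strings."""
    vals, cur, in_str, i = [], '', False, 0
    while i < len(s):
        c = s[i]
        if c == "'" and (i == 0 or s[i - 1] != '\\'):
            in_str = not in_str  # toggle on unescaped quote
            cur += c
        elif c == ',' and not in_str:
            vals.append(cur.strip()); cur = ''
        else:
            cur += c
        i += 1
    vals.append(cur.strip())
    return vals

def unquote(s):
    """Turn a raw SQL literal (quoted string, NULL, or number) into a Python value."""
    s = s.strip()
    if s == 'NULL':
        return None
    if s.startswith("'") and s.endswith("'"):
        return s[1:-1].replace("\\'", "'").replace('\\\\', '\\')
    return s  # numeric / boolean literals stay as strings here
```

Note the quote-toggle check only looks one character back, so a literal ending in an escaped backslash before a quote would confuse it; the script accepts that limitation for this dump.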

File diff suppressed because one or more lines are too long

View File

@ -39,9 +39,12 @@ const DownloadIcon = () => (
 // Mention tag with thumbnail + hover preview
 function MentionTag({ label, thumbUrl, assetType }: { label: string; thumbUrl?: string; assetType?: string }) {
   const [hover, setHover] = useState(false);
+  const [thumbBroken, setThumbBroken] = useState(false);
   const ref = useRef<HTMLSpanElement>(null);
   const [pos, setPos] = useState({ top: 0, left: 0 });
   const isAudio = assetType === 'Audio' || assetType === 'audio';
+  const isVideo = assetType === 'Video' || assetType === 'video';
+  const showThumb = thumbUrl && !thumbBroken;
   return (
     <>
@ -49,7 +52,7 @@ function MentionTag({ label, thumbUrl, assetType }: { label: string; thumbUrl?:
   ref={ref}
   className={styles.mentionTag}
   onMouseEnter={() => {
-    if (!isAudio && thumbUrl && ref.current) {
+    if (!isAudio && showThumb && ref.current) {
     const rect = ref.current.getBoundingClientRect();
     setPos({ top: rect.top - 8, left: rect.left + rect.width / 2 });
     setHover(true);
@ -59,18 +62,30 @@ function MentionTag({ label, thumbUrl, assetType }: { label: string; thumbUrl?:
 >
   {isAudio ? (
     <span style={{ marginRight: 3, fontSize: 13, verticalAlign: 'middle' }}>♫</span>
-  ) : thumbUrl ? (
+  ) : showThumb ? (
     <img
       src={tosThumb(thumbUrl, 28)}
       alt=""
       style={{ width: 14, height: 14, borderRadius: 3, objectFit: 'cover', verticalAlign: 'middle', marginRight: 3 }}
+      onError={() => setThumbBroken(true)}
     />
-  ) : null}
+  ) : isVideo ? (
+    <svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth="1.5" strokeLinecap="round" style={{ verticalAlign: 'middle', marginRight: 3, opacity: 0.6 }}>
+      <rect x="2" y="4" width="20" height="16" rx="2" />
+      <path d="M10 9l5 3-5 3V9z" fill="currentColor" stroke="none" />
+    </svg>
+  ) : (
+    <svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth="1.5" strokeLinecap="round" style={{ verticalAlign: 'middle', marginRight: 3, opacity: 0.6 }}>
+      <rect x="3" y="3" width="18" height="18" rx="2" />
+      <circle cx="8.5" cy="8.5" r="1.5" fill="currentColor" stroke="none" />
+      <path d="M21 15l-5-5L5 21" />
+    </svg>
+  )}
   {label}
 </span>
-{hover && thumbUrl && createPortal(
+{hover && showThumb && createPortal(
   <div className={styles.mentionPreview} style={{ top: pos.top, left: pos.left }}>
-    <img src={tosThumb(thumbUrl, 200)} alt={label} className={styles.mentionPreviewImg} />
+    <img src={tosThumb(thumbUrl, 200)} alt={label} className={styles.mentionPreviewImg} onError={(e) => { (e.target as HTMLImageElement).style.display = 'none'; }} />
     <div className={styles.mentionPreviewLabel}>{label}</div>
   </div>,
   document.body
@ -149,7 +164,7 @@ export function GenerationCard({ task, onOpenDetail }: Props) {
   const [detailPos, setDetailPos] = useState({ top: 0, right: 0 });
   const detailLinkRef = useRef<HTMLSpanElement>(null);
   const detailLeaveTimer = useRef<ReturnType<typeof setTimeout> | null>(null);
-  const [refPreview, setRefPreview] = useState<{ url: string; label: string; type: string; top: number; left: number } | null>(null);
+  const [refPreview, setRefPreview] = useState<{ url: string; label: string; type: string; top: number; left: number; isAssetRef?: boolean } | null>(null);
   const startDetailLeave = useCallback(() => {
     if (detailLeaveTimer.current) clearTimeout(detailLeaveTimer.current);
@ -294,11 +309,11 @@ export function GenerationCard({ task, onOpenDetail }: Props) {
   onMouseEnter={(e) => {
     if (ref.type === 'audio') return;
     const rect = e.currentTarget.getBoundingClientRect();
-    setRefPreview({ url: ref.previewUrl, label: ref.label, type: ref.type, top: rect.top - 8, left: rect.left + rect.width / 2 });
+    setRefPreview({ url: ref.previewUrl, label: ref.label, type: ref.type, top: rect.top - 8, left: rect.left + rect.width / 2, isAssetRef: ref.isAssetRef });
   }}
   onMouseLeave={() => setRefPreview(null)}
 >
-  {ref.type === 'video' ? (
+  {ref.type === 'video' && !ref.isAssetRef ? (
     <video src={ref.previewUrl} className={styles.refMedia} muted />
   ) : ref.type === 'audio' ? (
     <div className={styles.audioThumb}>
@ -309,7 +324,7 @@ export function GenerationCard({ task, onOpenDetail }: Props) {
     </svg>
   </div>
 ) : (
-  <img src={tosThumb(ref.previewUrl, 112)} alt={ref.label} className={styles.refMedia} />
+  <img src={tosThumb(ref.previewUrl, 112)} alt={ref.label} className={styles.refMedia} onError={(e) => { (e.target as HTMLImageElement).style.display = 'none'; }} />
 )}
 </div>
 ))}
@ -421,10 +436,10 @@ export function GenerationCard({ task, onOpenDetail }: Props) {
 {/* Reference thumbnail hover preview */}
 {refPreview && createPortal(
   <div className={styles.mentionPreview} style={{ top: refPreview.top, left: refPreview.left }}>
-    {refPreview.type === 'video' ? (
+    {refPreview.type === 'video' && !refPreview.isAssetRef ? (
       <video src={refPreview.url} className={styles.mentionPreviewImg} autoPlay loop muted playsInline />
     ) : (
-      <img src={tosThumb(refPreview.url, 300)} alt={refPreview.label} className={styles.mentionPreviewImg} />
+      <img src={tosThumb(refPreview.url, 300)} alt={refPreview.label} className={styles.mentionPreviewImg} onError={(e) => { (e.target as HTMLImageElement).style.display = 'none'; }} />
     )}
     <div className={styles.mentionPreviewLabel}>{refPreview.label}</div>
   </div>,

View File

@ -46,6 +46,18 @@
   transition: background 0.15s, opacity 0.15s;
 }
+.mentionAudioIcon {
+  display: inline-block;
+  margin-right: 3px;
+  font-size: 13px;
+  vertical-align: middle;
+  pointer-events: none;
+}
+.mentionAudioIcon::before {
+  content: '\266B'; /* ♫ rendered via CSS, not textContent — avoids polluting prompt text */
+}
 .mentionImg {
   width: 16px;
   height: 16px;

View File

@ -88,8 +88,8 @@ export function PromptInput() {
   const isAudio = opts.refType === 'audio' || opts.assetType === 'Audio';
   if (isAudio) {
     const icon = document.createElement('span');
-    icon.textContent = '\u266B';
-    icon.style.cssText = 'margin-right:3px;font-size:13px;vertical-align:middle;pointer-events:none';
+    icon.className = styles.mentionAudioIcon;
+    icon.setAttribute('aria-hidden', 'true');
     span.appendChild(icon);
   } else if (opts.thumbUrl) {
     const img = document.createElement('img');
@ -98,6 +98,7 @@ export function PromptInput() {
   img.setAttribute('width', '16');
   img.setAttribute('height', '16');
   img.style.cssText = 'width:16px;height:16px;border-radius:3px;object-fit:cover;vertical-align:middle;margin-right:3px;display:inline-block;pointer-events:none';
+  img.onerror = () => { img.style.display = 'none'; };
   span.appendChild(img);
 }
 // Hide the @ prefix visually (kept in textContent for pattern matching, not displayed)
@ -253,6 +254,27 @@ export function PromptInput() {
if (!el) return; if (!el) return;
setPrompt(el.textContent || ''); setPrompt(el.textContent || '');
setEditorHtml(el.innerHTML); setEditorHtml(el.innerHTML);
// Sync assetMentions from DOM — prevents stale refs after deleting @mention spans
const mentions: Record<string, unknown>[] = [];
el.querySelectorAll('[data-ref-type="asset"]').forEach((span) => {
const s = span as HTMLElement;
if (s.dataset.assetId) {
mentions.push({
assetId: s.dataset.assetId,
label: s.dataset.assetName || s.textContent?.replace('@', '') || '',
thumbUrl: s.dataset.thumbUrl || '',
assetType: s.dataset.assetType || 'Image',
duration: parseFloat(s.dataset.duration || '0'),
});
} else if (s.dataset.assetGroupId) {
mentions.push({
groupId: s.dataset.assetGroupId,
label: s.dataset.groupName || s.textContent?.replace('@', '') || '',
thumbUrl: s.dataset.thumbUrl || '',
});
}
});
useInputBarStore.setState({ assetMentions: mentions });
}, [setPrompt, setEditorHtml]); }, [setPrompt, setEditorHtml]);
// Remove orphaned mention spans when a reference is deleted // Remove orphaned mention spans when a reference is deleted


@@ -3,6 +3,7 @@ import { useInputBarStore } from '../store/inputBar';
 import { useGenerationStore } from '../store/generation';
 import { useAuthStore } from '../store/auth';
 import { Dropdown } from './Dropdown';
+import { showToast } from './Toast';
 import type { CreationMode, AspectRatio, Duration, GenerationType, ModelOption } from '../types';
 import styles from './Toolbar.module.css';
@@ -145,7 +146,14 @@ export function Toolbar() {
 }, [estimatedTokens, model, references, team]);
 const handleSend = useCallback(() => {
-  if (!isSubmittable) return;
+  if (!isSubmittable) {
+    const s = useInputBarStore.getState();
+    if (s.mode === 'universal' && s.references.some((r) => r.type === 'audio')
+        && !s.references.some((r) => r.type === 'image' || r.type === 'video')) {
+      showToast('音频不能作为唯一的参考素材,请同时添加图片或视频');
+    }
+    return;
+  }
   addTask();
 }, [isSubmittable, addTask]);


@@ -220,9 +220,9 @@ export function VideoDetailModal({ task, onClose, onReEdit, onRegenerate, onDele
 if (task.model) store.setModel(task.model as 'seedance_2.0' | 'seedance_2.0_fast');
 if (task.aspectRatio) store.setAspectRatio(task.aspectRatio as any);
 if (task.duration) store.setDuration(task.duration);
-// Load references from task
+// Load references from task (exclude asset library refs — they restore via @mentions in editorHtml)
 if (task.references && task.references.length > 0) {
-  const refs = task.references.filter(r => r.previewUrl).map(r => ({
+  const refs = task.references.filter(r => r.previewUrl && !r.isAssetRef).map(r => ({
     id: r.id,
     file: null as unknown as File,
     previewUrl: r.previewUrl,
@@ -485,7 +485,7 @@ export function VideoDetailModal({ task, onClose, onReEdit, onRegenerate, onDele
 {task.references.map((ref) => (
   <div key={ref.id} className={styles.refItem}>
     <div style={{ position: 'relative', width: 56, height: 56 }}>
-      {ref.type === 'video' ? (
+      {ref.type === 'video' && !ref.isAssetRef ? (
         <video src={ref.previewUrl} className={styles.refImg} muted style={{ cursor: 'pointer' }} onClick={() => ref.previewUrl && setRefMediaPreview({ url: ref.previewUrl, type: 'video' })} />
       ) : ref.type === 'audio' ? (
         <div className={styles.refAudioPlaceholder} style={{ cursor: 'pointer' }} onClick={() => ref.previewUrl && setRefMediaPreview({ url: ref.previewUrl, type: 'audio' })}>
@@ -496,7 +496,7 @@ export function VideoDetailModal({ task, onClose, onReEdit, onRegenerate, onDele
         </svg>
       </div>
     ) : ref.previewUrl ? (
-      <img src={tosThumb(ref.previewUrl, 300)} alt={ref.label} className={styles.refImg} style={{ cursor: 'zoom-in' }} onClick={() => setLightboxSrc(ref.previewUrl)} />
+      <img src={tosThumb(ref.previewUrl, 300)} alt={ref.label} className={styles.refImg} style={{ cursor: 'zoom-in' }} onClick={() => setLightboxSrc(ref.previewUrl)} onError={(e) => { (e.target as HTMLImageElement).style.display = 'none'; }} />
     ) : (
       <div className={styles.refAudioPlaceholder} style={{ fontSize: 12, color: 'var(--color-text-disabled)' }}></div>
     )}


@@ -31,14 +31,23 @@ function VideoThumbnail({ video, onClick }: { video: AssetVideo; onClick: () =>
   );
 }
+function isAssetUrl(url: string): boolean {
+  return url.startsWith('asset://') || url.startsWith('Asset://');
+}
 function assetVideoToTask(v: AssetVideo): GenerationTask {
-  const references = (v.reference_urls || []).map((ref, i) => ({
-    id: `ref_${v.task_id}_${i}`,
-    type: (ref.type || 'image') as 'image' | 'video',
-    previewUrl: ref.url,
-    label: ref.label || `素材${i + 1}`,
-    role: ref.role,
-  }));
+  const references = (v.reference_urls || []).map((ref, i) => {
+    const url = ref.url || '';
+    const assetRef = isAssetUrl(url);
+    return {
+      id: `ref_${v.task_id}_${i}`,
+      type: (ref.type || 'image') as 'image' | 'video' | 'audio',
+      previewUrl: assetRef ? (ref.thumb_url || '') : url,
+      label: ref.label || `素材${i + 1}`,
+      role: ref.role,
+      isAssetRef: assetRef || undefined,
+    };
+  });
   return {
     id: String(v.id),
     taskId: v.task_id,


@@ -31,14 +31,23 @@ function VideoThumbnail({ video, onClick }: { video: AssetVideo; onClick: () =>
   );
 }
+function isAssetUrl(url: string): boolean {
+  return url.startsWith('asset://') || url.startsWith('Asset://');
+}
 function assetVideoToTask(v: AssetVideo): GenerationTask {
-  const references = (v.reference_urls || []).map((ref, i) => ({
-    id: `ref_${v.task_id}_${i}`,
-    type: (ref.type || 'image') as 'image' | 'video',
-    previewUrl: ref.url,
-    label: ref.label || `素材${i + 1}`,
-    role: ref.role,
-  }));
+  const references = (v.reference_urls || []).map((ref, i) => {
+    const url = ref.url || '';
+    const assetRef = isAssetUrl(url);
+    return {
+      id: `ref_${v.task_id}_${i}`,
+      type: (ref.type || 'image') as 'image' | 'video' | 'audio',
+      previewUrl: assetRef ? (ref.thumb_url || '') : url,
+      label: ref.label || `素材${i + 1}`,
+      role: ref.role,
+      isAssetRef: assetRef || undefined,
+    };
+  });
   return {
     id: String(v.id),
     taskId: v.task_id,


@@ -59,7 +59,7 @@ function isAssetUrl(url: string): boolean {
   return url.startsWith('asset://') || url.startsWith('Asset://');
 }
-/** Build ReferenceSnapshot[] from raw reference_urls, excluding asset refs. */
+/** Build ReferenceSnapshot[] from raw reference_urls (including asset refs with thumb_url). */
 function buildReferenceSnapshots(
   refs: Array<Record<string, string>>,
   taskId: string,
@@ -67,15 +67,23 @@ function buildReferenceSnapshots(
   return refs
     .filter((ref) => {
       const url = ref.url || '';
-      return !isAssetUrl(url) && url.trim() !== '';
+      // Asset library refs need a thumb_url to render a thumbnail
+      if (isAssetUrl(url)) return !!(ref.thumb_url);
+      return url.trim() !== '';
     })
-    .map((ref, i) => ({
-      id: `ref_${taskId}_${i}`,
-      type: (ref.type || 'image') as 'image' | 'video' | 'audio',
-      previewUrl: ref.url || '',
-      label: ref.label || `素材${i + 1}`,
-      role: ref.role,
-    }));
+    .map((ref, i) => {
+      const url = ref.url || '';
+      const assetRef = isAssetUrl(url);
+      return {
+        id: `ref_${taskId}_${i}`,
+        type: (ref.type || 'image') as 'image' | 'video' | 'audio',
+        // Asset library refs use thumb_url; direct uploads use the original url
+        previewUrl: assetRef ? ref.thumb_url : url,
+        label: ref.label || `素材${i + 1}`,
+        role: ref.role,
+        isAssetRef: assetRef || undefined,
+      };
+    });
 }
 /** Extract asset mention metadata from raw reference_urls. */
@@ -610,8 +618,10 @@ export const useGenerationStore = create<GenerationState>((set, get) => ({
 }
 if (task.mode === 'universal') {
-  // task.references only contains file refs (assets filtered in backendToFrontend)
-  const references: UploadedFile[] = task.references.map((r) => ({
+  // Only include direct file refs — asset library refs are tracked via assetMentions
+  const references: UploadedFile[] = task.references
+    .filter((r) => !r.isAssetRef)
+    .map((r) => ({
     id: r.id,
     type: r.type,
     previewUrl: r.previewUrl,
@@ -661,8 +671,10 @@ export const useGenerationStore = create<GenerationState>((set, get) => ({
 }
 // For regeneration, we need to re-submit with the same TOS URLs
-// Set up the input bar state, then call addTask
-const references: UploadedFile[] = task.references.map((r) => ({
+// Only include direct file refs — asset library refs go via assetMentions fallback
+const references: UploadedFile[] = task.references
+  .filter((r) => !r.isAssetRef)
+  .map((r) => ({
   id: r.id,
   type: r.type,
   previewUrl: r.previewUrl,


@@ -285,10 +285,19 @@ export const useInputBarStore = create<InputBarState>((set, get) => ({
   ? state.references.length > 0
   : state.firstFrame !== null || state.lastFrame !== null;
 if (!hasText && !hasFiles) return false;
-// Audio cannot be sent alone — must have image or video
-if (state.mode === 'universal' && state.references.length > 0) {
-  const hasImageOrVideo = state.references.some((r) => r.type === 'image' || r.type === 'video');
-  if (!hasImageOrVideo && !hasText) return false;
+// Audio cannot be the only reference — Seedance API requires image or video alongside
+if (state.mode === 'universal') {
+  const hasAudioRef = state.references.some((r) => r.type === 'audio');
+  const hasAudioAsset = (state.assetMentions || []).some((m: Record<string, string>) =>
+    (m.assetType || '').toLowerCase() === 'audio');
+  if (hasAudioRef || hasAudioAsset) {
+    const hasImageOrVideoRef = state.references.some((r) => r.type === 'image' || r.type === 'video');
+    const hasImageOrVideoAsset = (state.assetMentions || []).some((m: Record<string, string>) => {
+      const t = (m.assetType || '').toLowerCase();
+      return t === 'image' || t === 'video';
+    });
+    if (!hasImageOrVideoRef && !hasImageOrVideoAsset) return false;
+  }
 }
 // Block submit if any reference is still uploading or failed
 if (state.references.some((r) => r.uploading || r.uploadError)) return false;
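The audio-gating rule in `canSubmit` above can be read as a standalone predicate. A minimal sketch, assuming simplified `Ref`/`Mention` shapes (illustrative only, not the app's actual store types):

```typescript
type Ref = { type: 'image' | 'video' | 'audio' };
type Mention = { assetType?: string };

// Audio may only accompany an image or video; it can never be the sole reference.
function audioRuleSatisfied(refs: Ref[], mentions: Mention[]): boolean {
  const hasAudio =
    refs.some((r) => r.type === 'audio') ||
    mentions.some((m) => (m.assetType || '').toLowerCase() === 'audio');
  if (!hasAudio) return true; // no audio involved, rule does not apply
  const hasVisualRef = refs.some((r) => r.type === 'image' || r.type === 'video');
  const hasVisualMention = mentions.some((m) => {
    const t = (m.assetType || '').toLowerCase();
    return t === 'image' || t === 'video';
  });
  return hasVisualRef || hasVisualMention;
}

console.log(audioRuleSatisfied([{ type: 'audio' }], []));                       // false
console.log(audioRuleSatisfied([{ type: 'audio' }, { type: 'image' }], []));    // true
console.log(audioRuleSatisfied([{ type: 'audio' }], [{ assetType: 'Video' }])); // true
```

Note that the rule checks both direct uploads (`references`) and asset library `@mentions`, which is exactly what the fix adds over the old `references`-only check.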


@@ -32,6 +32,7 @@ export interface ReferenceSnapshot {
   previewUrl: string;
   label: string;
   role?: string;
+  isAssetRef?: boolean;
 }
 export interface GenerationTask {
@@ -75,7 +76,7 @@ export interface BackendTask {
   result_url: string;
   thumbnail_url: string;
   error_message: string;
-  reference_urls: { url: string; type: string; role: string; label: string }[];
+  reference_urls: { url: string; type: string; role: string; label: string; thumb_url?: string }[];
   is_favorited: boolean;
   seed: number;
   created_at: string;
@@ -406,7 +407,7 @@ export interface AssetVideo {
   seconds_consumed: number;
   cost_amount?: number;
   aspect_ratio: string;
-  reference_urls?: { url: string; type: string; role: string; label: string }[];
+  reference_urls?: { url: string; type: string; role: string; label: string; thumb_url?: string }[];
   created_at: string;
 }


@@ -0,0 +1 @@
+mysql: [Warning] Using a password on the command line interface can be insecure.


@@ -0,0 +1,166 @@
# Production Incremental Sync Report (Three Teams)
## 📌 Sync Overview
| Item | Value |
|---|---|
| **Sync time** | 2026-04-17 20:15 |
| **Data source** | `数据库备份/video_auto_原19-55.sql` (19:55 backup of the old Aliyun DB `rm-7xv1uaw910558p1788o`) |
| **Target** | Production `mysql-d9bb4e81696d-public.rds.volces.com` / `video_auto` |
| **Script** | `migration_backup/idempotent_sync.py --prod --commit` |
| **Backup file** | `数据库备份/正式服_同步前全库备份_20260417-201347.sql` (37 MB) |
| **Target teams** | Team 3 漫堂动漫, Team 4 洁雯团队, Team 12 万物苏网络 |
---
## 💰 Spend and Balance Changes
| Team | spent before | spent after | spent Δ | balance before | balance after | balance Δ |
|---|---:|---:|---:|---:|---:|---:|
| Team 3 漫堂动漫 | 3,669.49 | 4,758.79 | **+1,089.30** | 4,603.51 | **3,514.21** | -1,089.30 |
| Team 4 洁雯团队 | 1,318.36 | 5,586.00 | **+4,267.64** | 3,224.64 | **-1,043.00** ⚠️ | -4,267.64 |
| Team 12 万物苏网络 | 6,370.45 | 6,567.73 | **+197.28** | 3,629.55 | **3,432.27** | -197.28 |
| **Total** | **11,358.30** | **16,912.52** | **+5,554.22** | 11,457.70 | 5,903.48 | **-5,554.22** |
> **Why Team 4's balance went negative**: on the old Aliyun system this team had 7 legacy users (jiew / yixiangAI001-006) who produced 409 generation records (~¥4,267) there. This sync merged those historical records into production, so their spend stacked onto the team's current total.
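The balance deltas above follow mechanically from the spend deltas; a quick arithmetic check of the three rows (figures copied from the table, invariant: balance_after = balance_before − (spent_after − spent_before)):

```typescript
const rows = [
  { team: 'Team 3',  spentBefore: 3669.49, spentAfter: 4758.79, balBefore: 4603.51, balAfter: 3514.21 },
  { team: 'Team 4',  spentBefore: 1318.36, spentAfter: 5586.00, balBefore: 3224.64, balAfter: -1043.00 },
  { team: 'Team 12', spentBefore: 6370.45, spentAfter: 6567.73, balBefore: 3629.55, balAfter: 3432.27 },
];
for (const r of rows) {
  const delta = r.spentAfter - r.spentBefore;          // spend increment
  const expected = r.balBefore - delta;                // implied balance after
  // float-tolerant comparison against the reported balance
  const ok = Math.abs(expected - r.balAfter) < 0.005;
  console.log(`${r.team}: spent Δ ${delta.toFixed(2)} -> balance ${expected.toFixed(2)} ${ok ? 'OK' : 'MISMATCH'}`);
}
```

All three rows reconcile, including Team 4's -1,043.00 (3,224.64 − 4,267.64).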
---
## ⏱️ Seconds Changes
| Team | sec before | sec after | Δ |
|---|---:|---:|---:|
| Team 3 漫堂动漫 | 5,475 | 6,561 | **+1,086** |
| Team 4 洁雯团队 | 1,791 | 6,064 | **+4,273** |
| Team 12 万物苏网络 | 5,328 | 5,493 | **+165** |
| **Total** | 12,594 | 18,118 | **+5,524** |
---
## 👤 Team Member Changes
| Team | Before | After | New users |
|---|---:|---:|---|
| Team 3 漫堂动漫 | 21 | 21 | none (all 12 users in the Aliyun source already existed) |
| Team 4 洁雯团队 | 8 | 8 | none (`yixiangAI007` was already created manually on production at 15:19, id=149) |
| Team 12 万物苏网络 | 11 | 14 | **+3**: 杨玉婷 (id=155), 钟世怡 (id=156), 梅晋滔 (id=157) |
| **Total** | 40 | 43 | **+3** |
---
## 📦 Other Data Increments
| Data type | Added |
|---|---:|
| Generation records (generationrecord) | **+607** (team3=141, team4=451, team12=15) |
| Asset groups (assetgroup) | +16 |
| Assets (asset) | +16 |
| Login records (loginrecord) | +51 |
| Login anomalies (loginanomaly) | +6 |
| Active sessions (activesession) | +14 |
| Admin audit logs (adminauditlog) | +6 |
---
## ✅ Safeguards
### 1. Full database backup (rollback-ready)
```
数据库备份/正式服_同步前全库备份_20260417-201347.sql 37M
```
### 2. Idempotency verification
The script was **re-run immediately** after committing; every table reported zero additions (log kept verbatim):
```
[4/8] generation_generationrecord
新增 0 generationrecord (team3=0, team4=0, team12=0)
...
[重算 team 统计]
Team 3: 无新增生成记录,跳过
Team 4: 无新增生成记录,跳过
Team 12: 无新增生成记录,跳过
```
### 3. Duplicate scan (within the three teams)
| Check | Duplicates |
|---|---|
| task_id | 0 ✅ |
| username | 0 ✅ |
| remote_group_id | 0 ✅ |
| session_id | 0 ✅ |
| loginrecord composite key | 0 ✅ |
### 4. Business unique-key dedup logic
| Table | Dedup key |
|---|---|
| accounts_user | username |
| generation_assetgroup | remote_group_id |
| generation_asset | remote_asset_id (falls back to group_id+name when empty) |
| generation_generationrecord | task_id |
| accounts_loginrecord | (user_id, created_at, ip_address) |
| accounts_loginanomaly | (user_id, login_record_id, rule, created_at) |
| accounts_activesession | session_id |
| accounts_adminauditlog | (operator_id, action, target_id, created_at) |
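These dedup keys are what make the sync idempotent: each incoming row is reduced to its business key, and rows whose key already exists on the target are skipped. A minimal in-memory sketch of that mechanism (the real implementation is `migration_backup/idempotent_sync.py`; this TypeScript version and its names are purely illustrative):

```typescript
type Row = Record<string, string | number>;

// Per-table business keys, mirroring a subset of the table above.
const keyFields: Record<string, string[]> = {
  accounts_user: ['username'],
  generation_generationrecord: ['task_id'],
  accounts_loginrecord: ['user_id', 'created_at', 'ip_address'],
};

function businessKey(table: string, row: Row): string {
  return (keyFields[table] || []).map((f) => String(row[f] ?? '')).join('\u0000');
}

// Insert only rows whose business key is absent; returns how many were added.
function syncTable(table: string, target: Row[], incoming: Row[]): number {
  const seen = new Set(target.map((r) => businessKey(table, r)));
  let added = 0;
  for (const row of incoming) {
    const k = businessKey(table, row);
    if (seen.has(k)) continue; // duplicate of an existing row, skip
    seen.add(k);
    target.push(row);
    added++;
  }
  return added;
}

const prod: Row[] = [{ task_id: 't1' }];
const backup: Row[] = [{ task_id: 't1' }, { task_id: 't2' }];
console.log(syncTable('generation_generationrecord', prod, backup)); // 1 (only t2 is new)
console.log(syncTable('generation_generationrecord', prod, backup)); // 0 (second run: idempotent)
```

This is why re-running the script after commit (safeguard #2) is a meaningful check: a second pass over the same backup must report "added 0" for every table.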
---
## ⚠️ Follow-ups
### 1. Restore Team 4 洁雯团队's balance
Current balance is **-1,043.00**; new generation tasks will be rejected by the backend for insufficient balance.
**Recommended action**: top the team up with at least **¥2,000** via the admin console `/admin/teams/4/topup`, restoring it to a usable state and recording a `team_topup` audit log entry.
### 2. Communicating with the business side
If 洁雯团队 users report that they "suddenly can't generate videos", explain:
> We recently merged historical data from the old Aliyun system, back-filling the spend records of 7 former teammates (about ¥4,267 in total). The balance has since been topped up; please continue as usual.
---
## 🔄 Re-sync Procedure
When the old (Aliyun) platform keeps producing new data:
```bash
# 1. Re-export the Aliyun backup, overwriting the old file
#    数据库备份/video_auto_原19-55.sql
# 2. Back up production (optional but recommended)
mysqldump -h mysql-d9bb4e81696d-public.rds.volces.com -P 3306 -u zyc -pZyc188208 \
  --default-character-set=utf8mb4 --single-transaction --skip-lock-tables \
  --no-tablespaces --set-gtid-purged=OFF --add-drop-table --databases video_auto \
  > "数据库备份/正式服_同步前_$(date +%Y%m%d-%H%M%S).sql"
# 3. Dry-run to inspect the delta
python3 migration_backup/idempotent_sync.py --prod
# 4. Commit once confirmed
python3 migration_backup/idempotent_sync.py --prod --commit
# 5. Run once more to verify idempotency
python3 migration_backup/idempotent_sync.py --prod
```
---
## 🔙 Emergency Rollback
If something goes seriously wrong, production can be fully restored from the backup:
```bash
mysql -h mysql-d9bb4e81696d-public.rds.volces.com -P 3306 -u zyc -pZyc188208 \
  video_auto --default-character-set=utf8mb4 \
  < "数据库备份/正式服_同步前全库备份_20260417-201347.sql"
```
> ⚠️ Note: rolling back **also wipes any real business data written to production during the sync window**; use with extreme caution.
---
**Report generated**: 2026-04-17
**Owner**: zyc
