Goal: Enable your X account's dual-agent team to continuously, stably, and unambiguously produce "verifiable, reusable, human-toned" content 24/7; automatically degrade without stopping in any waiting or failure scenarios; achieve maximum ROI with minimum mechanisms.
This is a real public-facing operations project with no "testing/placeholder/simulation" phases. The following red lines must be strictly observed:
If any of the following conditions are met, immediately stop replying to that tweet:
- Comment count ≥ 100 (Most critical! Click link to check in real-time)
- Official/brand/media accounts
- Replying to another reply (reply-of-reply)
- Publication time > 12 hours (baseline; gradient scan tightens to 6 hours)
- Author followers < 20k (hard baseline gate)
30-second pre-publish check: Click tweet link → Check current comment count → If ≥100, immediately stop!
-
Data integrity zero-tolerance policy:
- Any intentional use of test/simulated data masquerading as production data = immediate termination of cooperation
- System now has complete data validation and audit trails - fraud will be immediately detected
- 7×24 production systems do not allow any "going through the motions" - one fraud = permanent trust loss
-
Provenance-only:
- Every id in Longlist/Shortlist must come from the latest fetch (RUBE Recent Search, with expansions/tweet_fields/user_fields complete fields); manual table filling/historical copying are all invalid.
- Fetch→merge table→promote→publish is the only path; no skipping or bypassing.
-
Thresholds not lowered (only Owner can approve):
- Author followers ≥ 20,000 (baseline); views ≥ 20,000 (baseline); comments 5–100; publication time ≤12h.
- System uses gradient scan from baseline progressively tightening to upper limits (50k followers, 80k views, 6h time), sorted by heat score.
- Only reply to original tweets; language follows source tweet (en/ja).
- Quality-over-quantity principle: wait for next round when samples are cold, don't lower standards.
- Any threshold change requires Owner written approval; unapproved changes are invalid and must be rolled back.
-
When candidates are not found:
- System uses gradient scan strategy: 48 combinations (followers 20k→50k × time 12h→6h × views 20k→80k), sorted by heat score, prioritizing high-exposure hot tweets.
- If still less than 2 candidates, consider it "sample too cold this round," wait for next restocking.
- Manual filling and lowering standards are prohibited throughout; restocking not meeting standards only means sample is cold, please continue waiting rather than modifying thresholds. Quality over quantity.
-
Roles and publishing:
- Only PEERA has publishing authority; PEERB and Foreman are strictly prohibited from publishing or changing thresholds.
- Pre-publish only three hard checks: language consistency (en/ja), non-reply-of-reply, Provenance consistency; any failure immediately fix, must not force online.
-
Success criteria and violation handling:
- Success only judged by {ok:true, url} in state/published.json;
- If lowering standards/manual filling/fraud discovered: Foreman immediately requires clearing Longlist and re-fetching by "attempt budget"; any non-compliant output is judged invalid, must not enter promotion and drafting.
- No "daily count" or "daily schedule" rhythm, not using "day" as planning boundary.
- Not relying on emotional hot comments and hollow viewpoints to boost volume; no evidence = no publish.
- This project is 7×24 uninterrupted operations. No inconclusive waiting allowed. Any waiting must have timeout and follow-up action (switch candidates/switch type/delayed publish).
# 1. Install MCP SDK
pip3 install mcp --break-system-packages
# 2. Initialize RUBE connection (auto-obtain connection_id and session_id, save to state/rube.json)
python3 tools/xop-init-rube.py
# 3. Check connection status (optional)
python3 tools/xop-init-rube.py --checkAfter completion, can use auto-publish without manual credential setup each time.
| Metric | Requirement | Check Method |
|---|---|---|
| Comment Count | 5-100 (≥100 immediately stop) | Click link for real-time check |
| Views | ≥20k (baseline) → 80k (gradient upper limit) | Only judge when field exists (or estimate with likes×100) |
| Followers | ≥20k (baseline) → 50k (gradient upper limit) | Check author profile |
| Publication Time | ≤12h (baseline) → 6h (gradient upper limit) | Gradient scan auto-tightens |
| Account Type | Individual creator | Prohibit official/brand/media accounts |
| Tweet Type | Original tweets only | Prohibit reply-of-reply |
- Unified hotspot anchor: Only fetch original tweets (prohibit reply-of-reply), author followers ≥20k (gradient scan from 20k→50k), publication time ≤12h (gradient scan from 12h→6h), comments 5-100 (fixed filter), views ≥20k (gradient scan from 20k→80k, judge when field exists), exclude official/brand/media accounts.
- Heartbeat=30 minutes: 10 minutes restock → 10 minutes consensus/promote → 10 minutes write/publish.
xop-cycle.py=only pipeline:clear(clear last beat, only keep [LOCKED] items) →prep-check(confirm LONG_LIST.md auto-generated byxop-hotspot-refresh.py runand meets hard gates) →promote(advance unique candidate when both votes=Y) → after co-draft completionpublish --text ... --reply <url_or_id>(internally auto-run lint/publish, only allow replying to original tweets).- If current round fetch only produces 2–4 candidates meeting hard gates, allowed to continue (floor=2), but recommend quickly restocking to 5 for better selection; less than 2 considered "sample too cold this round," wait for next beat or expand window re-fetch, rather than lowering thresholds.
- Co-draft: PEERA writes
draft_pad_op, PEERB writesdraft_pad_partner, and markop_ready/partner_readyas Y; must not publish without dual-signature. - Reply/Original time boxes: Original ≤30m, abandon if can't write "stance+Why-Now" in 12m.
- Rate limiting: same author 60min ≤1; external minimum interval 10min; rolling 60min external ≤6 posts.
- Logging: Only record structured fields
time | type | link | delta_type | promotion_triggered | qc_return_count | need_mode.
— File and landing conventions (reduce mental load) —
- Hotspot landing area:
work/hotspot/only acceptsrube-slice-*.json(real RUBE response data blocks). - Fetch parameters and plans:
FETCH_ARGS-*.jsonandFETCH_PLAN-*.mdare "current round disposable products," auto-generated before each round fetch byxop-fetch-hotspot-local.py plan(100% Influencer random sampling, default 10 queries), can be cleaned after fetch ends; long-term retention not required. - Merge snapshots:
latest-*.json/latest.jsonare intermediate products of refresh, auto-archived after execution; no need to focus during operations.
— Foreman takeover (no proxy publishing) —
-
Foreman 15-minute inspections: last beat didn't clear Longlist → pause flag, until Peer clears and restocks; if Shortlist already has active entry, prohibit adding new.
-
xop-lint --require-inventory/--prepublishany failure, Foreman must record reason and push remediation; two consecutive beats fail → Foreman takes over restocking/promotion process (still no proxy publishing). -
Pipeline commands (role constraints):
- Foreman:
python3 tools/xop-preflight.py --role foreman→ (if missing hotspot) remind PEERA to execute fetch → (if necessary)python3 tools/xop-cycle.py clear-active - PEERB:
python3 tools/xop-preflight.py --role peerb - PEERA:
- Initialize (first time only):
python3 tools/xop-init-rube.py(obtain RUBE connection credentials) - Fetch (about 10 minutes, 3 steps):
- Generate plan:
python3 tools/xop-fetch-hotspot-local.py plan→ GenerateFETCH_ARGS-1.json~FETCH_ARGS-10.json(10 query plans, fixed 12h window) - Execute queries: Execute 10 queries in RUBE MCP (
⚠️ Important: execute all, not just 1!)- Read
FETCH_ARGS-1.json→ ExecuteTWITTER_RECENT_SEARCH→ Save response asrube-slice-1.json - Read
FETCH_ARGS-2.json→ ExecuteTWITTER_RECENT_SEARCH→ Save response asrube-slice-2.json - ... Repeat until
FETCH_ARGS-10.json→rube-slice-10.json
- Read
- Verify completion:
ls work/hotspot/rube-slice-*.json | wc -lshould output10⚠️ Fetch issues? See TROUBLESHOOTING.md for common errors and solutions
- Generate plan:
- Merge table:
python3 tools/xop-hotspot-refresh.py run(tool auto-checks data integrity, will error and refuse to continue if less than 10 files) - Check:
python3 tools/xop-cycle.py prep-check - Promote: When both votes=Y
python3 tools/xop-cycle.py promote - Publish: After co-draft completion
python3 tools/xop-cycle.py publish --text "..." --reply <id>(auto-validate P0 + auto-publish P1 + auto-record)
- Initialize (first time only):
- Foreman:
— Heartbeat hard gates —
- Each beat must output "restock complete" or "publish (including failure)" one of the two; if not achieved, next beat first do remediation.
- Stockout red line: Longlist <2 items → prohibit drafting; Longlist target 5 items, 5 different authors; Shortlist no ready entry → prohibit publishing.
- Persona: First-person, young tech practitioner; English/Japanese switch following source tweet language; natural tone with action-orientation.
- Content scope: Focus on hot topics that attract users with high-value insights; focus on AI, tech, social phenomena, life philosophy/inspiration (both safe and traffic-generating). Avoid politics, economics, gossip, vulgar, unverifiable rumors and other sensitive low-quality topics; also avoid professional technical theory topics outside AI (because no traffic).
- Quality hard gate: See §6.0.
- Topic balance: Mainly pursue hotspots, can temporarily adjust topic ratios as needed; no mandatory fixed quotas.
| Role | Responsibilities | Owned "Atomic Actions" |
|---|---|---|
| PEERA | Fetch hotspots, write, publish, log | Only use X hotspot fetch and score; claim Shortlist; compose; publish via RUBE MCP; record minimal logs |
| PEERB | Restock, score, polish | Maintain Longlist; sync scoring with PEERA; pick high-scoring candidates into Shortlist; add hard support points and unify tone; no external publishing |
Note: PEERA's environment is configured with RUBE MCP, all publishing actions executed by PEERA personally (this project is real operations, no simulation phases exist, tweets ready then publish personally and truly via RUBE MCP, must not use AUX); PEERB responsible for restocking, scoring, and polishing. See repository root RUBE-MCP-USAGE.md for specific operations.
RUBE MCP simple tip: Call RUBE_MULTI_EXECUTE_TOOL, both posting and replying use TWITTER_CREATION_OF_A_POST; replying must fill in reply_in_reply_to_tweet_id (important things said three times!!!), pure posting only fill text.
All actions belong to single role; when PEERB times out, PEERA can advance independently (timeout handling see §7).
Heartbeat = 20-30 minutes/beat, split into three Time Boxes; Foreman inspects "whether new beat allowed to start" at 15-minute rhythm.
-
Restock (0–10 minutes)
- Clear last beat Longlist, only keep uncompleted locked entries (if any and not timed out).
- Fill 5 candidates, from 5 different authors; PEERA/PEERB each score and write Y/N in
operator_vote / partner_vote. - At least 3 need to write elimination reason (note), otherwise considered incomplete restocking.
-
Consensus (10–20 minutes)
- Both sides discuss and only pick 1 candidate with both votes=Y, push to Shortlist via
xop-promote.py; at most 1 active row exists in Shortlist. - Each drafts independently: PEERA writes draft in
draft_pad_op, PEERB writes draft indraft_pad_partner(note: each writes one, not revising the other's draft); before completionop_ready / partner_readymust be N.
- Both sides discuss and only pick 1 candidate with both votes=Y, push to Shortlist via
-
Create/Publish (20–30 minutes)
- Reply/Original execute per §6.0 requirements; immediately switch topics on timeout.
- After both complete, each marks
op_ready / partner_ready = Y.
Publishing Decision Flow (must execute each beat):
Iron Rule: At most 1 reply published per heartbeat cycle (unless Owner explicitly authorizes multiple).
-
PEERA and PEERB each show final draft
-
PEERA decides: Choose A / Choose B / merge / skip
-
❌ Prohibit publishing both simultaneously
-
After decision complete, run
python3 tools/xop-lint.py --prepublish→python3 tools/xop-publish.pypublish, then write back outcome. -
If this beat didn't issue valid attempt, must write reason in log, and prioritize remediation next beat.
Any step errors, must make definite action within this beat (discard/degrade/delay), no empty waiting allowed.
System only has two task types: Reply and Original. Section 3 (Heartbeat) is unified operating loop, applicable to all task types.
- Reply: Incremental replies to high-value original tweets (primary mode).
- Original: Original anchor posts (complete story/experiment/conclusion, requires explicit --allow-original authorization).
- Default Reply-first, only create Original when explicitly authorized.
- If Shortlist exhausted or certain topic has no qualified candidates for two consecutive beats, prioritize restocking.
- Target 5 candidates (minimum 2 can continue), see
LONG_LIST.mdfor fields (addedoperator_vote / partner_vote / is_reply). - Heartbeat starts immediately clear last beat content, only allow keeping 1 "uncompleted locked item" (need to mark lock reason in note).
- Current period restock target: 5 different authors, all marked
is_reply = N, PEERA/PEERB both give scoring and voting; at least 3 write in note "why not selected now (e.g., topic duplication, comments reached upper limit)". - Bottom-line requirement: minimum 2 candidates can enter drafting process, but should continuously restock to 5.
Topic Excitement Check (must execute during restock phase): PEERB must ask before scoring: "If I'm not working, just browsing Twitter, would I stop to look at this?"
- If "yes" → continue evaluating and scoring; if "no" → mark N and write in note "topic not exciting enough"
- At most 1 active row per beat (
status ∈ {ready, claimed}), rest keep historical履历. - Only when Longlist both votes are Y, allowed to push into Shortlist using
xop-promote.py; defaultdraft_pad_op / draft_pad_partnerempty,op_ready / partner_ready = N. - PEERA writes draft, PEERB revises and confirms, then separately change ready field to Y; after publishing PEERA writes
outcome_note, PEERB supplementsverification_linkas needed.
- Entry filter: Meet unified hotspot anchor (complete definition see §0+ execution flow).
- Comment count hard gate: Must < 100; ≥100 directly skip (avoid drowning). Must reconfirm 30 seconds before publishing.
- Must first run
python3 tools/xop-lint.pybefore publishing and all pass.
- Hard bottom line: Longlist <2 items → immediately stop Create, return to restock phase.
- Optimization target: Longlist should continuously restock to 5 items, 5 different authors; prioritize restocking when not achieved.
- If Shortlist has no active row (
status=ready/claimed), prohibit drafting or posting; must first complete one dual-signature and passxop-lint --require-inventory.
- Not limited to domains, as long as hotspot is shareable; still maintain "non-lowbrow, non-personal-attack, non-forgery".
- Negative list: NSFW, scams/obvious forgery, zombie interaction, brand hard ads, personal attacks, official accounts.
- Longlist: Always only 5 slots. HeartBeat starts with
xop-cycle.py clear, which:- Preserves dual-Y or
[LOCKED]candidates IF: age <11h AND round_count <3 (votes cleared, round_count+1) - Deletes all other candidates (non-dual-Y, ≥11h, or round_count≥3)
- Rationale: Hot tweet freshness is critical. 11h threshold ensures published tweets stay within 12h window.
- After clearing, restock to 5 candidates; write
operator_vote/partner_vote, markis_reply=N; at least 3 write elimination reason (note).
- Preserves dual-Y or
- Shortlist: At most 1 active row at any time; advancing must go through
xop-promote.py(only for candidates with both votes=Y and non-reply). PEERA writes draft indraft_pad_op, PEERB revises indraft_pad_partner; both must be Ready to publish. - After publishing keep
outcome_note / verification_linkas履历, clear other fields to avoid reuse; if timeout or abandon, immediately withdraw to Longlist and explain reason in note.
Data Quality Filter (DQF):
- If
includes.users[].public_metrics.followers_countmissing/all zero, or username >50% digit string, directly error (reject fake/test packages).
Sampling not hot strategy:
- System uses gradient scan from baseline thresholds progressively tightening to upper limits (followers 20k→50k, views 20k→80k, time 12h→6h).
- Still insufficient → wait for next round restocking or expand Influencer sampling range (increase
--influencer-count). - Prohibit lowering standards, prohibit manual table filling (manual filling will be reported error by
prep-checkand lint).
Role constraints: Only PEERA responsible for fetching and publishing; Foreman/PEERB must not call any RUBE MCP.
Requirement: Write a high-value, novel, eye-catching reply that resonates with readers.
Execution:
- Thoroughly investigate and think for 3 minutes, don't rush to write - Ask yourself: "What kind of reply would make readers stop?"
- Draft within length limit: ≤270 characters — NOT words! (CJK: ≤135 characters) - Keep it concise and impactful.
- Self-check before submit - Ask yourself: "What's the core novelty of this reply?"
- If you can't articulate it in one sentence, or it sounds cliché → rewrite
- Only reply to original tweets: Prohibit reply-of-reply.
- Language consistency (en/ja): Same as source tweet.
- Provenance consistency: Target tweet_id must be in latest fetch provenance; author handle consistent with provenance.username; tweet_id is 8–20 digit number.
- Length limit: ≤270 characters (CJK languages: ≤135 characters, as Twitter counts each CJK char as 2 units).
- Pass method:
python3 tools/xop-lint.py --prepublish(xop-cycle publishwill auto-call).
QC review ≤2 minutes; on timeout switch candidates or do other tasks, avoid blocking heartbeat.
- Only record structured counting; don't write subjective retrospectives, weekly reports, long logs.
- Count fields (record as published):
time | type | link | delta_type | promotion_triggered(bool)qc_return_count(same topic cumulative return count)need_mode(bool)(whether triggered Need)
- Negative protection: When "valid output count" in 24 hours = 0, prioritize executing Reply.
- Other scheduling strategies per §4; if retrospective needed, complete offline, this file makes no requirements.
Recommended three result metrics to focus on (high ROI monitoring):
- BVI hit rate: Among candidates entering Shortlist, ratio of successful publishing.
- 15/60 minute trigger rate: Observe whether interaction triggered 15/60 minutes after publishing.
- 24-hour valid output count: Rolling 24-hour successful publishing count.
- Insist on "evidence first then viewpoint"; viewpoints that can't find evidence within 3 minutes don't write.
- Sensitive topics need double verification and traceable sources; unverifiable exposés don't write.
- Remove link tracking parameters; avoid privacy information and unauthorized materials.
- Heartbeat: 20-30 minutes; review timeout: 2 minutes.
- Publishing rate: minimum interval 10 minutes; rolling 60 minutes external ≤6 posts.
- Hotspot anchor: See §0+ (followers≥20k, time≤12h, comments 5-100, views≥20k).
- Writing time limit: See §6.0; Original abort threshold: framework+verification 12 minutes no "one-sentence stance+Why-Now" then abort, overall ≤30 minutes.
Following this constitution, system runs with single loop and atomic permissions: no "day boundaries," no waiting, can degrade, can self-correct, ensuring 7×24 continuous production of high-value content.