Analyst Co-pilot (BigQuery + Redshift)
Assist analysts with safe SQL drafting, metric validation, and stakeholder-ready summaries across BigQuery and Redshift.
Filter by intent, runtime, and category to find install-ready skills.
Assist analysts with safe SQL drafting, metric validation, and stakeholder-ready summaries across BigQuery and Redshift.
Build a private retrieval-ready brand memory from product, campaign, and policy context for downstream agent skills.
Generate deterministic weekly marketing dashboard outputs from normalized performance tables for BI publishing.
Validate dashboard datasets with freshness, completeness, reconciliation, anomaly, and schema drift checks before publish.
Convert validated BI performance outputs into concise executive narratives with actions and risks.
Build and maintain reusable Playwright smoke tests with planner, generator, and healer loops for web app regression checks.
Orchestrate Playwright planner, generator, and healer workflows from Codex with VS Code loop execution for reusable web UX QA.
Validate ad copy, landing URLs, and UTM tracking for policy and brand compliance before campaign launch.
Chain dashboard generation, QA gating, and executive narrative writing for deterministic weekly BI reporting.
Evaluate proposed harness changes against benchmark prompts and red-team scenarios before adoption.
Review recent agent run logs to identify repeated misses, skipped skills, and inefficient loops in the harness.
Propose new or revised harness skills and policy text from recurring failure patterns while keeping scope limited to harness assets.
Run change-aware lint, typecheck, build, and test verification based on the files and stack touched by a code change.
Parse coverage outputs, map changed files to uncovered risk areas, and recommend targeted tests.
Plan repo-aware code changes before editing by mapping scope, commands, risks, and likely side effects.
Review code changes for correctness, performance, security, and test completeness, then draft a structured PR summary.
Identify missing unit, integration, regression, and edge-case test scenarios for recently changed code.
Plan and analyze A/B tests with explicit hypotheses, sample sizing assumptions, and decision rules.
Score AI-generated marketing outputs across accuracy, clarity, brand fit, policy compliance, and actionability.
Generate channel-specific creative variants for Google PMax and Meta Reels with deterministic length checks.
Monitor pacing across paid channels, detect anomalies, and recommend budget reallocation actions.
Define dynamic creative assembly rules per audience and context while enforcing brand and policy constraints.
Design lifecycle A/B tests with hypothesis checks, sample-size guidance, and stopping criteria.
Design lifecycle triggers and channel sequencing for CRM journeys with measurable guardrails and rollout rules.
Analyze weekly paid media performance across Meta and Google Ads with KPI deltas and prioritized actions.
Turn top-performing SEO themes into paid search opportunities with keyword clustering and launch-ready recommendations.
Review manifests and lockfiles for vulnerable, suspicious, or overpowered dependencies and recommend mitigations.
Assess whether the runtime environment is safe for autonomous agent execution and recommend isolation or approval boundaries.
Treat external content, logs, tickets, webpages, and MCP responses as untrusted data and quarantine suspicious instructions.
Scan diffs and repository context for leaked secrets, risky credential placement, and over-broad environment exposure.