
You're now managing dozens of AI coding agents simultaneously — and Cursor just shipped the thing that manages them for you while you sleep.

Cursor Automations: Your AI Coding Agents Now Run Without You

Cursor Automations — Fast Facts (March 5, 2026):

  • Released: March 5, 2026 — Cursor's most significant platform expansion since the original IDE launch
  • What it is: Always-on agents that run on schedules or are triggered by events from Slack, Linear, GitHub, PagerDuty, and webhooks — each spinning up a cloud sandbox that follows your instructions using configured MCPs and models
  • The scale: Cursor estimates it runs hundreds of automations per hour internally — code review, incident response, and weekly digests running silently while engineers work on something else
  • The business context: Cursor's annual revenue has grown to over $2 billion — doubling over the past three months. Market share holds at roughly 25% of generative AI clients despite competition from OpenAI and Anthropic
  • The shift: "It's not that humans are completely out of the picture," said Jonas Nelle, Cursor's engineering chief for asynchronous agents. "It's that they aren't always initiating."

The problem with agentic coding was supposed to be capability. Could AI write good enough code? Could it understand context well enough? Could it handle real codebases? Those questions are mostly answered — and they revealed a new problem nobody fully anticipated.

As agentic coding spreads, the working life of a software engineer has become dazzlingly complex. A single engineer might oversee dozens of coding agents at once, launching and guiding different processes as necessary. It's a lot to keep track of, and human engineers' attention has quickly become the limiting resource. You solved the "AI can't code" problem and created the "I can't supervise this many AIs" problem. The bottleneck moved from model capability to human attention.

In many AI-assisted coding environments today, developers follow what is commonly described as a "prompt-and-monitor" workflow. Engineers instruct an AI system, examine its output, and then provide the next prompt. While this process can speed up development, it also requires constant attention. Every agent you launch is a context switch you're committed to making when it finishes. Every PR review, every incident, every bug report — a human has to kick it off. That's not automation. That's delegation with extra steps.

Cursor Automations is the answer to that problem. The product is aimed at engineering teams that are already using agentic coding heavily and now need help with the slower parts of development — review, monitoring, incident handling, and maintenance — turning Cursor from a coding assistant into a workflow layer that keeps operating after a developer steps away. Here's exactly how it works, what you can build with it, what Rippling is already doing with it, and how to set up your first automation in under 10 minutes.

What Cursor Automations Actually Does (The Non-Marketing Version)

When an automation fires, Cursor spins up an isolated cloud sandbox, executes your instructions with the configured models and MCP (Model Context Protocol) servers, and self-verifies its results. Agents also gain access to a memory tool that enables them to retain information from previous executions and refine performance over time.

That last sentence is the one worth sitting with. Agents have access to a memory tool that lets them learn from past runs and improve with repetition. This isn't just automation in the cron-job sense — each agent run makes the next run better. A PR review automation that gets corrected on your team's style preferences in week one doesn't repeat that mistake in week two. The system is building institutional knowledge about your codebase and your team's standards, stored and applied automatically.
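To make that concrete, here is a minimal sketch of what run-to-run memory buys a review agent: findings the team has explicitly dismissed stop being re-raised on later runs. The storage and matching scheme here are assumptions for illustration, not Cursor's published mechanism.

```python
# Illustrative sketch (assumed design, not Cursor's actual internals):
# a review agent's memory records dismissed findings so later runs
# stop flagging them.

class ReviewMemory:
    def __init__(self):
        self.dismissed = set()  # (rule, path) pairs the team has waved off

    def record_dismissal(self, rule, path):
        self.dismissed.add((rule, path))

    def filter(self, findings):
        # Drop any finding the team has already rejected in a past run.
        return [f for f in findings
                if (f["rule"], f["path"]) not in self.dismissed]

mem = ReviewMemory()
findings = [{"rule": "prefer-f-strings", "path": "app/views.py"},
            {"rule": "missing-test", "path": "app/billing.py"}]

print(len(mem.filter(findings)))  # 2: nothing learned yet
mem.record_dismissal("prefer-f-strings", "app/views.py")  # week-one correction
print(len(mem.filter(findings)))  # 1: not flagged again in week two
```

The week-one correction persists, so week two's run only surfaces the genuinely new finding.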

Every Trigger Type — What Starts an Automation

Automations run on schedules or are triggered by events from Slack, Linear, GitHub, PagerDuty, and webhooks. Here's what each trigger enables in practice:

GitHub Pull Requests

A security review agent activates on every push to the main branch, analyzing diffs for vulnerabilities, skipping issues the team has already discussed and accepted, and notifying teams via Slack for high-risk findings. PR risk assessment goes further: the system assesses pull request risk based on blast radius, technical complexity, and infrastructure impact. Low-risk changes are approved automatically, while higher-risk updates prompt reviewer assignments based on contribution history. Decisions are summarized in Slack and logged to Notion via MCP integrations so teams can review and refine agent behavior over time.

PagerDuty Incidents

When PagerDuty detects an incident, an automation launches an agent that uses Datadog integrations to investigate logs and examine recent code changes. The agent notifies on-call engineers in Slack with monitoring details and proposes a fix through an automated pull request. The incident response loop that previously required a groggy on-call engineer to manually dig through logs at 3am now runs in seconds, hands the engineer a diagnosis and a proposed fix, and waits for a human to approve the merge.

Slack Messages

Bug report triage flows check for duplicates, create issues in Linear, investigate root causes, and post summaries back in the original Slack thread. Someone posts a bug report in your #bugs channel — the automation responds in that same thread with: duplicate status, root cause investigation results, a proposed fix, and a Linear ticket already created. The engineer who filed the report gets an answer in the thread they filed it in without tagging anyone.

Linear Issues

When a new issue lands in Linear, an automation can investigate the codebase for context, identify the relevant files and recent changes, add that context to the issue, and assign it to the right engineer based on contribution history — all before any human has seen the notification.
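The "assign by contribution history" step can be pictured as a simple frequency count over recent commits. The data shape below is an assumption about what a GitHub MCP query might return; the logic is the recoverable idea.

```python
from collections import Counter

# Sketch of assignee selection by contribution history. The commit
# record shape ({"author", "files"}) is an assumed MCP return format,
# not a documented Cursor API.

def suggest_assignee(commits, affected_files):
    """Pick the engineer with the most commits touching the affected files."""
    touches = Counter(
        c["author"] for c in commits
        if set(c["files"]) & set(affected_files)
    )
    return touches.most_common(1)[0][0] if touches else None

commits = [
    {"author": "dana", "files": ["auth/session.py"]},
    {"author": "dana", "files": ["auth/tokens.py"]},
    {"author": "lee",  "files": ["billing/invoice.py"]},
]
print(suggest_assignee(commits, ["auth/session.py", "auth/tokens.py"]))  # dana
```

A real automation would pull the commit list live and attach the suggestion to the Linear issue; the ranking itself stays this simple.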

Schedules (Cron)

Each morning, agents review recently merged code to identify areas lacking test coverage. Teams can set up weekly repository digests — weekly Slack summaries of repository changes, highlighting merged pull requests, bug fixes, technical debt resolutions, and updates to security or dependencies. The Monday morning "what happened last week" meeting becomes a Slack message that was already waiting when you logged in.

Custom Webhooks

Any system that can fire a webhook can trigger a Cursor Automation — CI/CD pipelines, internal tooling, third-party services, customer-facing events. If your deployment pipeline completes, a webhook can trigger a post-deploy smoke test automation that checks logs, validates key endpoints, and posts results to #deployments before the engineer who pushed the deploy has closed their laptop.
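A CI step firing such a webhook might look like the sketch below. The payload fields and the HMAC signing scheme are assumptions for illustration — match them to whatever your Cursor webhook trigger is actually configured to expect.

```python
import hashlib
import hmac
import json

# Hypothetical sketch: build a signed webhook payload for a
# "deploy finished" event. The secret, event name, and payload
# fields are all assumptions, not a documented Cursor contract.
SECRET = b"example-shared-secret"

def build_webhook(event, payload):
    body = json.dumps({"event": event, **payload}, sort_keys=True)
    # Sign the body so the receiving automation can verify the sender.
    signature = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    headers = {"Content-Type": "application/json",
               "X-Signature-SHA256": signature}
    return body, headers

body, headers = build_webhook("deploy.finished",
                              {"service": "api", "sha": "a1b2c3"})
# The CI job would then POST `body` with `headers` to the automation's
# webhook URL (via curl, urllib.request, etc.).
print(len(headers["X-Signature-SHA256"]))  # 64 hex chars
```

Signing the body lets the automation reject payloads that did not come from your pipeline, which matters once a webhook can kick off agent runs on its own.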

Real-World Use: What Rippling Built With Automations

Rippling is the most detailed public case study for Cursor Automations — and it illustrates the difference between "automation as a concept" and "automation in a real engineering org."

Engineers at Rippling have deployed automations for personal and team workflows. One setup aggregates meeting notes, action items, TODOs, and video links from Slack, combining them with GitHub pull requests, Jira issues, and mentions to produce deduplicated dashboards every two hours. Additional automations handle Jira issue creation from Slack threads, discussion summaries in Confluence, incident triage, weekly status reports, and on-call transitions. Shared automations extend these benefits team-wide.

The key phrase is "deduplicated dashboards every two hours." Engineering context is scattered across Slack threads, GitHub notifications, Jira tickets, meeting recordings, and calendar invites. A senior engineer at a company like Rippling spends a meaningful portion of their day aggregating that context manually — reading back through Slack to remember what was decided, checking GitHub to see what got merged, pulling up Jira to see ticket status. Cursor Automations runs that aggregation every two hours automatically and surfaces the result in one place.

Tim Fall, a Senior Staff Software Engineer at Rippling, said: "Automations have made the repetitive aspects of my work easy to offload. By making automations to round up tasks, deal with doc updates, and respond to Slack messages, I can focus on the things that matter. Anything can be an automation!"

The Bugbot Origin Story: How Automations Was Built

One early example is Bugbot, a long-standing Cursor feature that the team sees as a predecessor to the broader Automations system. Bugbot fires every time an engineer commits to the codebase and reviews the new code for bugs and other issues. Using Automations, Cursor has been able to expand that system into more involved security audits and more thorough reviews. "This idea of thinking harder, spending more tokens to find harder issues, has been really valuable," said engineering lead Josh Ma.

Bugbot was the proof of concept. It ran on every commit, found real bugs, and engineers trusted it enough to stop manually reviewing low-risk changes. That trust — built over thousands of Bugbot runs — is what made the broader Automations platform viable. The team had already demonstrated that agents could be trusted to run continuously without human initiation. Automations generalizes that pattern to every other recurring engineering task.

The "Conveyor Belt" Model: What Changes for Engineers

The tool addresses the complexity of overseeing dozens of coding agents simultaneously. By automating agent launches, the system seeks to reduce the manual tracking required in agentic coding environments. This moves the workflow toward a "conveyor belt" model where human intervention is targeted and specific.

"In the abstract, anything that an automation kicks off, a human could have also kicked off," said Jonas Nelle. "But by making it automatic, you change the types of tasks that models can usefully do in a codebase." That distinction matters. It's not just about saving the 30 seconds it takes to manually start a PR review. It's about the tasks that simply don't happen in a prompt-and-monitor world because nobody has the bandwidth to initiate them — continuous test coverage monitoring, daily security scans, proactive technical debt identification. Those tasks were always valuable. They just never got done because initiating them competed with everything else for engineer attention. Automations makes them free.

The Competitive Context

The launch occurs amid competition from OpenAI and Anthropic in the agentic coding space. Ramp data indicates Cursor's market share is holding steady at roughly 25% of generative AI clients. Despite the competition, Cursor's financial growth remains high — annual revenue has grown to over $2 billion, doubling over the past three months.

OpenAI Codex has agents that run in cloud environments. Claude Code has long-running autonomous task execution. GitHub Copilot Coding Agent handles PR automation natively. What Cursor Automations does differently: it runs across your entire stack simultaneously — code review, incident response, bug triage, documentation, and cross-tool coordination — as a unified platform rather than individual point solutions. The memory layer that improves with every run is also unique — no competitor has published an equivalent persistent learning mechanism for automated agents.

How to Set Up Cursor Automations (Step by Step)

Getting Started:

  1. Go to cursor.com/automations — requires a Cursor account with a paid plan
  2. Click "New Automation" or start from a template (templates available for PR review, incident response, test coverage, and weekly digest)
  3. Select your trigger: GitHub, Slack, Linear, PagerDuty, Schedule, or Webhook
  4. Configure your MCP connections — connect Slack, Datadog, Notion, Jira, or any other tool your automation needs to read from or write to
  5. Write your instruction: describe what you want the agent to do in plain English — same style as a Cursor prompt
  6. Set your model: choose from configured models (Claude Sonnet 4.6, GPT-5.3, Gemini 3.1 Pro, or others you've set up in Cursor settings)
  7. Enable memory: toggle "Learn from past runs" to let the agent improve with each execution
  8. Test: click "Run Now" to fire the automation manually and verify output before making it live
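As a mental model, the eight steps above collapse into one declarative definition: a trigger, the MCPs, a model, a memory toggle, and an instruction. Cursor has not published a file-based config format, so the YAML below is purely illustrative — every field name is an assumption, not documented syntax.

```yaml
# Hypothetical automation definition -- illustrative only,
# not a documented Cursor config format.
name: weekly-repo-digest
trigger:
  schedule: "0 9 * * 1"        # Mondays at 09:00
mcps: [github, slack]
model: claude-sonnet-4.6
memory: true                   # step 7: "Learn from past runs"
instruction: >
  Summarize last week's merged PRs, bug fixes, and dependency
  updates for this repository, and post the digest to #eng-weekly.
```

However the UI stores it, these five pieces are what every automation reduces to, which is why the templates in step 2 differ only in trigger and instruction.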

Setting Up the PR Review Automation (Most Common Starting Point):

  1. Trigger: GitHub → "Pull request opened" or "Pull request merged"
  2. Connect GitHub MCP and Slack MCP
  3. Instruction example: "Review this PR for security vulnerabilities, style inconsistencies, and missing test coverage. If risk is high (blast radius affects core auth or payments), post findings to #code-review in Slack and suggest a reviewer based on recent contributions to affected files. If risk is low, auto-approve and post a one-line summary to Slack."
  4. Enable memory: the agent will learn your team's style preferences and stop flagging things your team has explicitly accepted
  5. Set notification threshold: only page Slack for high-risk findings to avoid noise

Setting Up the PagerDuty Incident Response Automation:

  1. Trigger: PagerDuty → "Incident created" or "Incident acknowledged"
  2. Connect PagerDuty MCP, Datadog MCP, GitHub MCP, Slack MCP
  3. Instruction example: "When an incident fires, immediately query Datadog logs for the affected service over the last 2 hours. Check GitHub for code changes merged to main in the last 24 hours that touch the affected service. Post a diagnostic summary to the incident's Slack channel including: likely root cause, recent relevant code changes, and a proposed fix as a draft PR. Tag the on-call engineer."
  4. Result: on-call engineer opens Slack to find a diagnosis and a draft fix already waiting — not a blank incident page
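The "recent relevant code changes" step from the instruction above reduces to filtering commits by time window and path. The commit record shape is an assumption about what a GitHub MCP query might return; the filtering logic is the point.

```python
from datetime import datetime, timedelta

# Sketch of incident triage step 3: keep merges from the last 24 hours
# that touch the affected service's path. The commit dict shape
# ({"sha", "merged_at", "files"}) is an assumed MCP return format.

def recent_suspects(commits, service_path, now, window_hours=24):
    cutoff = now - timedelta(hours=window_hours)
    return [
        c["sha"] for c in commits
        if c["merged_at"] >= cutoff
        and any(f.startswith(service_path) for f in c["files"])
    ]

now = datetime(2026, 3, 5, 12, 0)
commits = [
    {"sha": "a1b2c3", "merged_at": now - timedelta(hours=3),
     "files": ["services/payments/charge.py"]},
    {"sha": "d4e5f6", "merged_at": now - timedelta(hours=30),
     "files": ["services/payments/refund.py"]},  # too old
    {"sha": "0aa111", "merged_at": now - timedelta(hours=1),
     "files": ["docs/README.md"]},               # wrong path
]
print(recent_suspects(commits, "services/payments/", now))  # ['a1b2c3']
```

The surviving SHAs are what the agent would cite in its diagnostic summary and use as the starting point for the draft fix.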

Cursor Automations vs. GitHub Copilot Coding Agent vs. Claude Code

| Factor | Cursor Automations | GitHub Copilot Agent | Claude Code |
| --- | --- | --- | --- |
| Trigger types | Slack, GitHub, Linear, PagerDuty, Schedule, Webhook | GitHub events only | Manual / terminal only |
| Always-on (no human initiation) | ✅ Core feature | ⚠️ Limited to GitHub events | ❌ Human-initiated |
| Memory across runs | ✅ Learns from past runs | ❌ Per-session only | — |
| Cross-tool coordination | ✅ Slack + Datadog + Notion + Jira + GitHub simultaneously | GitHub + limited integrations | Via MCP tools |
| Incident response | ✅ PagerDuty → logs → fix → Slack | ⚠️ Manual trigger only | — |
| Model choice | ✅ Any — Claude, GPT, Gemini, Grok | GPT-5.3 Codex primarily | Claude only |
| IDE integration | ✅ Cursor IDE native | ✅ VS Code native | Terminal / any IDE via CLI |
| Pricing | Included with Pro ($20/mo) / Business ($40/user/mo) | Included with Copilot Pro+ ($19/mo) | Included with Claude Pro ($20/mo) |

Cursor Automations Pricing

| Plan | Price | Automations Access | Best For |
| --- | --- | --- | --- |
| Hobby | Free | Limited — manual triggers only, no scheduled automations | Testing and evaluation |
| Pro | $20/month | Full Automations access — all triggers, memory, MCP integrations, cloud sandbox | Individual engineers, freelancers |
| Business | $40/user/month | Full Automations + shared team automations, admin controls, SSO, audit logs, centralized billing | Engineering teams, organizations |

Usage costs (cloud sandbox compute):

Each automation run spins up a cloud sandbox — compute costs apply separately from the plan fee based on sandbox runtime and tokens consumed by configured models. High-frequency automations (e.g., triggering on every commit to a busy repo) will accumulate meaningful usage costs. Start with lower-frequency triggers (daily schedules, manual webhooks) to calibrate costs before enabling per-commit triggers on active repositories. Monitor usage in cursor.com/settings/usage.
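A back-of-envelope estimate shows why trigger frequency dominates cost. The per-minute and per-token prices below are placeholder assumptions, not Cursor's published rates — check your own usage page for real numbers.

```python
# Back-of-envelope usage-cost estimate for one automation.
# Both rates are placeholder assumptions, NOT Cursor's actual pricing.
COST_PER_SANDBOX_MINUTE = 0.01  # assumed $/minute of sandbox runtime
COST_PER_1K_TOKENS = 0.003      # assumed blended $/1K model tokens

def monthly_cost(runs_per_day, minutes_per_run, tokens_per_run, days=30):
    """Estimate one automation's monthly compute + token cost."""
    runs = runs_per_day * days
    compute = runs * minutes_per_run * COST_PER_SANDBOX_MINUTE
    tokens = runs * tokens_per_run / 1000 * COST_PER_1K_TOKENS
    return round(compute + tokens, 2)

# A per-commit trigger on a busy repo (40 commits/day) vs. a daily digest:
per_commit = monthly_cost(runs_per_day=40, minutes_per_run=3, tokens_per_run=50_000)
daily = monthly_cost(runs_per_day=1, minutes_per_run=5, tokens_per_run=80_000)
print(per_commit, daily)  # 216.0 8.7
```

Even with these modest assumed rates, the per-commit automation costs roughly 25x the daily one, which is the arithmetic behind starting with low-frequency triggers.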

Frequently Asked Questions

What Is Cursor Automations?

Cursor Automations is a system designed to create agents that operate continuously within coding environments, activating based on specific schedules or external triggers such as incoming Slack messages, newly created Linear issues, merged GitHub pull requests, or PagerDuty incidents. Each automation spins up an isolated cloud sandbox, runs your instructions using configured tools and models, self-verifies results, and learns from past runs to improve over time.

When Did Cursor Automations Launch?

March 5, 2026 — announced via Cursor's official X account and changelog at cursor.com/changelog/03-05-26. The feature builds on Cursor's existing Bugbot system, which has been running automated code review on commits since mid-2025.

Is Cursor Automations Free?

Basic access is available on the free Hobby plan with limitations. Full Automations — all triggers, memory, MCP integrations, and cloud sandbox execution — requires Cursor Pro at $20/month or Business at $40/user/month. Cloud sandbox compute costs apply on top of plan fees based on usage.

How Is Cursor Automations Different From Just Running Cursor Manually?

As Cursor's Jonas Nelle put it: "In the abstract, anything that an automation kicks off, a human could have also kicked off. But by making it automatic, you change the types of tasks that models can usefully do in a codebase." The difference isn't just speed — it's that certain valuable tasks (continuous security scanning, test coverage monitoring, proactive technical debt flagging) never happen in a manual workflow because nobody has bandwidth to initiate them. Automations makes those tasks free by removing the human initiation requirement entirely.

What Tools Does Cursor Automations Integrate With?

Native trigger integrations: GitHub, Slack, Linear, PagerDuty, custom webhooks, and schedules. MCP tool integrations (configured by you): Datadog, Notion, Jira, Confluence, and any other tool with an available MCP server. One team built a software factory using Cursor Automations with Runlayer MCP and plugins, stating: "We move faster than teams five times our size because our agents have the right tools, the right context, and the right guardrails."

Can Multiple Team Members Share Automations?

Yes — on the Business plan. Shared automations let team members use and build on each other's configured workflows. An automation one engineer creates for PR risk assessment is available to the entire team without each person recreating it. Admin controls let team leads manage which automations are shared, who can create new ones, and audit logs track every run.

What Models Can Cursor Automations Use?

Any model configured in your Cursor settings — Claude Sonnet 4.6, Claude Opus 4.6, GPT-5.3 Codex, GPT-5.3 Instant, Gemini 3.1 Pro, Grok 4.1, and others. You select the model per automation — use a cheaper, faster model for high-frequency low-stakes triggers (e.g., commit-level style checks) and a more capable model for complex, infrequent tasks (e.g., monthly architecture audits).

Is Cursor Automations the Same as Cursor Background Agent?

Related but distinct. Cursor's Background Agent (launched 2025) is a long-running agent you manually start that works independently on a task while you do something else. Cursor Automations is triggered automatically by external events or schedules — no human initiation at all. Background Agent is for tasks you decide to delegate. Automations is for tasks that shouldn't require a human decision to start in the first place.
