
How Leaders Are Using Agentic AI to Transform Daily Workflows



INTRODUCTION

Imagine a contact center where AI doesn't just suggest responses but autonomously handles customer inquiries, escalates complex issues at exactly the right moment, and continuously improves from every interaction. Agentic AI systems are moving beyond simple automation to execute multi-step workflows, make contextual decisions, and act independently within defined boundaries, transforming how organizations operate at every level.

Unlike traditional automation that relies on predefined rules and structured inputs, agentic AI can reason through problems, adapt to changing contexts, and coordinate tasks across systems without constant human intervention. Business communications platforms like Dialpad are integrating these capabilities to help organizations scale customer interaction management while maintaining quality and control.

We asked leaders across industries to share how they're implementing agentic AI in daily operations. Their experiences reveal practical frameworks for deploying autonomous systems, managing the balance between speed and oversight, and measuring success as AI takes on increasingly complex responsibilities.

Internal AI transformation enables strategic resource reallocation

Shezan Kazi, Head of AI Transformation, Dialpad

We launched Dialpad's internal AI Transformation initiative in early 2025 with a simple mandate: if we're going to sell agentic AI, we need to live it first. Over the past year, we've catalogued use cases across every function, run cross-functional pilots, and rolled out an AI coaching curriculum to every employee. The results speak for themselves: we reallocated a significant portion of the company to our agentic AI product line while maintaining full roadmap velocity, because internal AI-driven productivity gains absorbed the capacity gap.

The decision to go all-in came from seeing early pilots deliver meaningful productivity improvements in targeted workflows: automating pipeline hygiene, synthesizing customer insights across channels, and cutting manual reporting cycles nearly in half.

On governance, we operate with an AI Council, role-specific playbooks, and clear guardrails. Every agent runs within a deterministic runtime: humans own high-stakes decisions, AI handles the routine. If you can't trace what an agent did and why, it shouldn't be acting alone.

Dialpad tip

Dialpad's AI Agent autonomously routes customer inquiries based on intent and complexity, handling routine questions end-to-end while escalating nuanced issues to the right specialist with complete conversation context already loaded.

Knowledge management automation saves 40 hours weekly for remote teams

Christophe Pasquier, CEO, Slite

As CEO and founder of Slite, a knowledge management tool, and Super.work, an AI enterprise search tool, we use agentic AI across the company: from auto-generating change logs from merged code, to filling RFPs by pulling answers from our own documentation, to AI agents that research leads and draft follow-ups. We've automated roughly 40 hours of work per week this way.

What drove the shift was simple: we're a 20-person remote team building two products. We couldn't afford to spend human time on work that's structured and repeatable. The harder question is autonomy. I think about it as a spectrum, not a binary. I let AI run wild on routine data processing, but I am extremely cautious on anything customer-facing, and calibrate everything in between based on reversibility.

For instance, we built a knowledge agent that updates our documentation on its own. It generates dozens of cohesive change suggestions, each of which a human can apply in seconds. The companies that master this progressive autonomy scaling will dominate. The ones that deploy full autonomy without guardrails will eventually learn expensive lessons.

AI detection platform builds operational trust through transparency

Edward Tian, Founder/CEO, GPTZero

At GPTZero, we're transitioning from treating agentic AI as simply an interesting product to using it as an operational framework that helps our teams become more efficient. A good example is how we monitor model performance and edge cases. In the past, we relied on analysts to manually review hundreds of thousands of model outputs; now, lightweight agents continuously monitor large data sets for anomalous outputs, potential failure modes, and misuse. This gives us a continuous stream of signals rather than sporadic bursts of manual review.

The shift away from traditional automation happened because traditional automation is effective only in predictable workflows. Because AI technology evolves rapidly and its associated data is constantly changing, agentic AI tools let us explore data, summarize findings, and escalate insights without rigid operating procedures.

Agentic AI has a tightly defined scope of autonomy. Agents can collect data, organize it, and make recommendations, but they cannot make final decisions. Any change to product functionality, behavior, or policy must be reviewed by human researchers and engineers.

Venture studio scales portfolio operations with AI-first workflows

David Kolodny, Entrepreneur and Co-Founder, Wilbur Labs

AI and automation are a core focus across both the studio and the portfolio. They've been part of how we operate since we got started in 2016, but the impact is much broader today with the rise of LLMs and their applicability across nearly every function.

At the portfolio level, one example is Barkbus, the nation's largest mobile dog-grooming company. Barkbus uses AI to book appointments over the phone and via text, intelligently optimize grooming routes, automate large parts of customer support to improve the overall experience, and personalize marketing with unique images and illustrations of each customer's dog.

AI is already making it faster and easier to start businesses. Research is easier. Design is easier. Prototyping is easier. The bar for getting something off the ground has dropped, and it's only going to keep dropping from here. But like every big technology shift before it, it doesn't mean the fundamentals stop mattering. If anything, it makes them matter more. As launching gets easier, prioritization and execution become the real differentiators.

In practice, that's how we think about the balance between autonomy and oversight. Agentic systems are powerful, but they're most effective when they operate inside well-designed guardrails. Human judgment still plays the final role in strategic decisions, but AI can dramatically accelerate the day-to-day execution that gets teams there.

Client verification cuts onboarding from days to hours with autonomous workflows

Elliot Sterling, Web Content Writer, Opus Virtual Offices

Agentic AI systems like Stripe Identity and Salesforce Einstein transformed our client onboarding from labor-intensive manual processes to fully autonomous workflows. Previously, staff manually verified client documents for business location selection and account provisioning across our 650+ locations. Processing volume created significant delays.

Autonomous systems now handle these activities, eliminating manual bottlenecks. Staff time shifted from administrative processing to client satisfaction and new client acquisition. We maintain human oversight by ensuring live receptionists remain the final contact point. While AI directs, sorts, and manages background tasks, human receptionists handle phone calls.

This isn't just backup. It's core to our business model. Small business clients choose us to project professional image, and automated solutions can't replace the judgment our receptionists provide during unexpected or sensitive calls. Agentic automation handles the throughput; human expertise handles the moments that define client relationships. Balancing these elements lets us scale operations while preserving the personalized service that differentiates our offering.

Customer service automation achieves 40% ticket resolution with 90% confidence gates

Baris Zeren, CEO, Bookyourdata

Agentic AI handles our support ticket workflow from reading incoming questions to retrieving knowledge base information, drafting responses, and setting follow-ups without human intervention. Support volume doubled while team size remained constant, forcing a choice between compromising response times or trusting AI to handle straightforward questions autonomously. The system now closes approximately 40% of tickets independently.

Managing autonomy means AI only acts on decisions with confidence scores above 90%; anything lower gets routed to humans. Lower thresholds produced incorrect answers that damaged client relationships, while higher cutoffs meant humans handled the bulk of tickets with no automation benefit. Finding the right threshold proved critical.
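Under the hood, this kind of gate can be as simple as a single threshold check. Here is a minimal, hypothetical sketch; the field names, routing labels, and data model are illustrative assumptions, not Bookyourdata's actual system, though the 0.9 threshold mirrors the one described above.

```python
from dataclasses import dataclass

# Hypothetical confidence gate; 0.9 mirrors the threshold described above.
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class Ticket:
    question: str
    draft_answer: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def route(ticket: Ticket) -> str:
    """Auto-resolve high-confidence tickets; escalate everything else."""
    if ticket.confidence >= CONFIDENCE_THRESHOLD:
        return "auto_resolve"  # AI sends the drafted response and closes the ticket
    return "human_queue"       # a support agent reviews before anything goes out

print(route(Ticket("How do I reset my password?", "Use the reset link.", 0.95)))
print(route(Ticket("Please review this billing dispute.", "Draft reply.", 0.72)))
```

The interesting engineering work is not the comparison itself but calibrating the threshold against real outcomes, which is exactly the tuning process the quote describes.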

The real challenge was knowledge loss. When AI resolves common issues automatically, support agents never learn those patterns. New hires lacked experience with frequent problems, so when unusual cases arose that AI missed, nobody knew how to help. We now require everyone to handle tickets manually their first month before AI assistance activates, creating the foundation that AI would otherwise skip. This ensures team competency even as automation expands.

Dialpad tip

Real-Time Assist Cards in Dialpad trigger automatically when agents encounter specific customer scenarios, surfacing step-by-step guidance, compliance checklists, or approval workflows without agents needing to search for information mid-call.

Scheduling automation recovers 20-25 hours monthly per team of six

Caitlin Agnew-Francis, Commercial Sales Manager, Desky

Agentic AI transformed how we manage sales follow-up for commercial accounts across Australia and North America. The cognitive load of tracking who needs follow-up, when, and under what circumstances previously fell entirely to manual effort. Things occasionally slipped through cracks, and in commercial sales, missing a touchpoint at the wrong moment loses the sale.

Our system monitors all inquiry statuses, compares time since last contact, and automatically initiates follow-ups based on prospect journey stage. For a six-person team, this recovers approximately 20-25 hours of manual administration monthly. We no longer lose sales due to delayed responses.
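The trigger logic described above can be sketched with a per-stage cadence table. The stage names and day counts below are invented for illustration and are not Desky's actual values.

```python
from datetime import datetime, timedelta

# Hypothetical maximum days of silence before a follow-up fires,
# keyed by prospect journey stage. Values are illustrative only.
FOLLOW_UP_CADENCE_DAYS = {
    "new_inquiry": 1,
    "quote_sent": 3,
    "negotiation": 2,
    "post_sale": 14,
}

def needs_follow_up(stage: str, last_contact: datetime, now: datetime) -> bool:
    """True when the time since last contact exceeds the stage's cadence."""
    return now - last_contact > timedelta(days=FOLLOW_UP_CADENCE_DAYS[stage])

now = datetime(2025, 6, 10)
print(needs_follow_up("quote_sent", datetime(2025, 6, 5), now))  # 5 days > 3: follow up
print(needs_follow_up("post_sale", datetime(2025, 6, 5), now))   # 5 days <= 14: wait
```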

The system proposes actions but doesn't operate fully autonomously. Agents review recommendations before customer contact. We recently had a situation where AI flagged a two-day delay risk on a residential switchboard replacement. The prediction proved correct, but how we communicated that to homeowners required human judgment. We knew they had a tenant moving in that weekend, so we reshuffled crew ourselves and delivered on time. Customers want updates from real people who understand their specific situation, not software. AI handles schedule mathematics; humans own the conversations that matter.

Research automation increases content output 89% without adding headcount

Timothy Clarke, Senior Reputation Manager, Thrive Local

We use agentic AI for content research where agents autonomously produce competitive analyses, discover trending topics, uncover keyword opportunities, and gather supporting data before content creation. AI creates comprehensive research briefs with relevant statistics, identifies competitor content gaps, and suggests different angles without human intervention. What required 2-3 hours of manual research now completes automatically overnight.

Our team realized they spent more time researching than writing. Reducing the research heavy lifting increased content output by 89% without expanding headcount. Writers now receive morning research briefs and focus exclusively on writing high-quality content from those insights.

For oversight, AI conducts independent research and reasoning, but writers must validate information accuracy and decide what perspective to include. Sometimes AI delivers outdated statistics or misunderstands context, proving human fact-checking remains essential. We've found agentic AI excels at information gathering but still needs humans for quality control and strategic application. Neither modality achieves the same efficiency and quality alone. Combining AI's rapid research capabilities with human writers' creative thought processes and critical evaluation produces more in-depth articles at higher volume than either could alone.

Procurement automation saves $300K annually while maintaining human approval gates

Ritu Purohit Bhalavat, SCM & Procurement Solution Architect, Mastek Ltd

At Mastek, agentic AI moved from pilot to production across procurement and supply chain. We deployed agents that autonomously handle supplier email triage, invoice retrieval, PO requisition creation, and contract validation. One deployment delivered approximately $300K in annual savings for an industrial tech client; another cut contract processing time by 60%.

The shift happened because the technology finally matured enough for live enterprise workflows, not just controlled environments. Our approach uses tiered autonomy: routine, high-confidence tasks run without human intervention; ambiguous or high-stakes decisions get routed for human review. Think of it as a co-pilot model where AI handles velocity and humans handle judgment.

We embedded this into our ADOPT AI framework, where human oversight, fairness, and transparency are architectural requirements, not afterthoughts. In procurement especially, consequences of unchecked AI decisions are too real to treat governance as optional. The balance we've achieved allows agents to process standard workflows autonomously while escalating anything involving significant spend, new vendor relationships, or contract modifications. This structure delivers speed and consistency while maintaining the accountability that enterprise procurement demands.
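The escalation rules described above reduce to a small predicate. This sketch is illustrative only; the spend ceiling and field names are assumptions, not part of Mastek's actual ADOPT AI framework.

```python
# Hypothetical auto-approval ceiling, in dollars. Anything above it,
# plus any new-vendor or contract-modifying action, goes to a human.
AUTO_APPROVAL_SPEND_LIMIT = 10_000

def requires_human_review(spend: float,
                          new_vendor: bool,
                          modifies_contract: bool) -> bool:
    """Escalate significant spend, new vendor relationships, or contract changes."""
    return (spend > AUTO_APPROVAL_SPEND_LIMIT
            or new_vendor
            or modifies_contract)

print(requires_human_review(2_500, False, False))   # routine PO: runs autonomously
print(requires_human_review(50_000, False, False))  # significant spend: escalate
print(requires_human_review(500, True, False))      # new vendor: escalate
```

Encoding the escalation criteria as explicit, auditable rules rather than leaving them to the agent's judgment is what makes governance an architectural requirement instead of an afterthought.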

DevOps monitoring reduces incident response from 45 minutes to 3 minutes

Ayush Raj Jha, Senior Software Engineer, Oracle Corporation

The shift started from frustration during on-call duty. When our ECS service started OOMKilling pods, I spent 45 minutes on a mechanical process I'd done dozens of times: open CloudWatch, find the alarm, pull logs, trace to recent deployment, roll back. Every step was predictable.

I built a multi-agent SRE system on AWS using Anthropic's Claude that executes that workflow autonomously. It monitors CloudWatch alarms, reasons about root cause, and issues Kubernetes remediation. What surprised me was Claude's diagnostic capability: not just pattern matching but actually weighing multiple hypotheses against log data.

My autonomy rule is simple: if I would execute the action without calling anyone at 2am, the agent can do it. If I would wake someone up first, the agent recommends and waits. That line sits between a rolling restart and a database failover. The agent handles the former, escalates the latter. Dry-run mode by default was non-negotiable until I had confidence in reasoning quality for live execution. This reduced average incident response from 45 minutes to approximately 3 minutes while maintaining safety through clear escalation boundaries.
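The "would I call someone at 2am?" rule above can be captured as a tiny dispatch gate. The action lists, dry-run default, and return strings here are all illustrative assumptions, not the author's actual system.

```python
# Actions the agent may execute on its own vs. actions it may only recommend.
SAFE_ACTIONS = {"rolling_restart", "scale_up", "clear_cache"}
ESCALATE_ACTIONS = {"database_failover", "delete_volume"}

def handle(action: str, dry_run: bool = True) -> str:
    """Execute safe remediations; recommend and wait on everything else."""
    if action in ESCALATE_ACTIONS or action not in SAFE_ACTIONS:
        return f"escalate: page on-call, recommend '{action}'"
    if dry_run:  # dry-run by default until reasoning quality is trusted
        return f"dry-run: would execute '{action}'"
    return f"execute: '{action}'"

print(handle("rolling_restart"))                 # safe action, dry-run mode
print(handle("rolling_restart", dry_run=False))  # safe action, live execution
print(handle("database_failover"))               # high-stakes: escalate
```

Note the defensive default: an unrecognized action escalates rather than executes, so the gate fails closed.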

Dialpad tip

Dialpad's conversation intelligence creates searchable transcripts of every interaction, allowing teams to audit AI-assisted conversations, identify where autonomous systems need refinement, and track governance compliance across thousands of daily conversations.

Workflow design thinking prevents autonomy without accountability

Marty Hitzeman, Director of Marketing, EMPIST

Organizations seeing real value from agentic AI treated it as a workflow design problem before a technology decision. The question isn't just what we can automate; it's where autonomous action creates leverage and where it introduces risk we're not ready to manage.

On the marketing side, we use agentic tools to handle research, content workflows, and campaign reporting tasks that previously required significant manual effort. The shift happened naturally as the tools matured enough to be reliable. What drove it wasn't a strategic mandate but practical necessity: time spent on repeatable, low-judgment tasks is time not spent on work requiring actual human thinking.

The balance question is where most organizations fail. Autonomy without oversight isn't efficiency, it's exposure. In IT and cybersecurity contexts especially, we advise clients to treat AI autonomy like access permissions: grant what's necessary, audit regularly, and never let systems operate in consequential areas without human checkpoints. Organizations that build oversight into workflows from day one (not bolt it on after problems emerge) will succeed with agentic AI at scale.

CONCLUSION

These industry leaders reveal a consistent pattern: successful agentic AI implementation requires clear governance frameworks designed to guide autonomous execution. Top-performing organizations establish confidence thresholds (like the 90% gate Bookyourdata described), define escalation paths, and create tiered autonomy levels based on risk and complexity.

The metrics that matter have evolved beyond traditional efficiency measures. Leaders now track autonomous task completion rates, decision accuracy, escalation patterns, and the quality of human-AI handoffs. They measure not just what AI handles, but how well it knows when to step back.

The competitive advantage belongs to companies that can scale AI autonomy safely while maintaining accountability and oversight. Agentic AI doesn't eliminate the need for human judgment. It demands more sophisticated frameworks for determining where autonomy ends and human oversight begins.