Quick Wins by Role
Role-specific AI tasks you can try this week - with example prompts, risk notes, and measurement cues.

Author
Joe Draper
Founder, Arkwright
The best way to build confidence with AI isn't a grand transformation project. It's picking one small task you do every week, making it faster and better with AI, and proving the value.
This guide gives you role-specific starting points - tasks that take 15-30 minutes to set up, show clear before/after improvement, and carry minimal risk. Do one of these well, and you'll have the proof you need to expand.
How to Use This Guide
Each section follows the same structure:
- The Wins - Specific tasks where AI reliably adds value
- Example Approach - How to actually do it
- Risk Notes - What to watch out for
- Measurement - How to prove it worked
Pick one win from your role. Try it this week. Track the results. That's it.
Software and IT
You're already closer to AI than most roles - you understand systems, APIs, and automation. The wins here aren't about replacing your technical skills; they're about eliminating the tedious parts so you can focus on the interesting problems.
The Wins
PR summaries and commit messages
Stop writing the same boilerplate. AI can read a diff and produce a clear, consistent summary of what changed and why.
Example approach:
```
Here's the diff for my PR. Write a summary that covers:
- What changed (2-3 bullet points)
- Why it changed (the ticket/issue being addressed)
- Any breaking changes or migration notes
Keep it under 200 words. Use present tense ("Adds", not "Added").
```
Risk notes: AI sometimes misses the intent behind changes. It'll describe what changed accurately but might miss why it matters. Always sanity-check the "why" section.
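If you find yourself pasting diffs by hand, the prompt above can be assembled automatically. A minimal sketch, assuming a local git checkout (the `main` base branch name is an assumption; swap in yours) — paste the resulting prompt into whichever AI tool you use:

```python
import subprocess

# Mirrors the prompt template above; the diff is appended at the end.
PROMPT_TEMPLATE = """Here's the diff for my PR. Write a summary that covers:
- What changed (2-3 bullet points)
- Why it changed (the ticket/issue being addressed)
- Any breaking changes or migration notes
Keep it under 200 words. Use present tense ("Adds", not "Added").

Diff:
{diff}"""

def build_pr_summary_prompt(base_branch: str = "main") -> str:
    """Diff the current branch against base_branch and wrap it in the prompt."""
    diff = subprocess.run(
        ["git", "diff", f"{base_branch}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return PROMPT_TEMPLATE.format(diff=diff)
```

The triple-dot range (`main...HEAD`) diffs against the merge base, so you only see your branch's changes.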
Release notes from commit history
Aggregate a sprint's worth of commits into user-facing release notes. The AI handles the tedious categorisation; you handle the polish.
Example approach:
```
Here are the commits from the last two weeks. Generate release notes that:
- Group changes into: New Features, Improvements, Bug Fixes, Technical
- Rewrite commit messages into user-friendly language
- Flag anything that looks like a breaking change
- Ignore commits that are purely internal (refactoring, tests, CI)
Audience: Non-technical product users.
```
Risk notes: AI can't know what's actually user-facing vs internal. You'll need to edit the output to remove irrelevant items and add context for important changes.
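If your team writes conventional-commit prefixes (`feat:`, `fix:`, and so on — an assumption; adjust to your own convention), you can pre-sort commits into the buckets above before the AI rewrite pass, so purely internal commits never reach the output:

```python
from collections import defaultdict

# Prefix-to-bucket mapping is illustrative: adjust to your team's convention.
# Prefixes mapped to None are internal-only and dropped from user-facing notes.
GROUPS = {
    "feat": "New Features",
    "fix": "Bug Fixes",
    "perf": "Improvements",
    "refactor": None,
    "test": None,
    "ci": None,
    "chore": None,
}

def group_commits(subjects: list[str]) -> dict[str, list[str]]:
    """Pre-sort commit subjects into release-note buckets before the AI pass."""
    grouped = defaultdict(list)
    for subject in subjects:
        prefix, _, rest = subject.partition(":")
        # "fix(auth)" and "fix" both map to the "fix" bucket.
        bucket = GROUPS.get(prefix.split("(")[0].strip().lower(), "Technical")
        if bucket is not None:
            grouped[bucket].append(rest.strip() or subject)
    return dict(grouped)
```

Unrecognised prefixes land in "Technical" so nothing silently disappears; only the explicitly internal ones are dropped.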
Documentation sync after changes
You updated the code. Now update the docs. AI can identify what documentation needs to change based on your diff.
Example approach:
```
Here's the diff for my changes to the authentication module.
Here's the current documentation for that module.
Identify which sections of the documentation are now outdated.
For each, suggest the specific edit needed.
```
Risk notes: AI might suggest documentation changes that are technically accurate but miss the broader context. Review carefully before committing doc changes.
Ticket summaries for handoff
Summarise a messy ticket thread into a clear handoff document for the next person who picks it up.
Example approach:
```
Here's the full ticket history including comments.
Summarise into:
- Original problem (2 sentences)
- What's been tried
- Current status
- Recommended next steps
- Key context the next person needs
```
Risk notes: AI will reflect whatever confusion exists in the thread. If the ticket is genuinely muddled, the summary will be too. Use it as a starting point, not the final word.
Measurement
- Time to write PR summaries (before vs after)
- Documentation accuracy (fewer "docs are outdated" complaints)
- Handoff quality (ask the receiving person)
Finance and Accounting
Finance work is precise, regulated, and high-stakes. AI won't replace your judgement on anything that matters - but it can handle the drafting, categorisation, and first-pass analysis that eats up your week.
The Wins
Invoice and expense descriptions
Turn cryptic line items into clear descriptions. Useful for month-end, audits, and anyone trying to understand what was actually spent.
Example approach:
```
Here's a list of expense line items with vendor names, amounts, and dates.
For each, write a clear 1-sentence description of what it likely is.
Flag any that are ambiguous or need human review.
Format: [Vendor] - [Amount] - [Your description] - [Confidence: High/Medium/Low]
```
Risk notes: AI is guessing based on vendor names and patterns. Medium/low confidence items need human verification. Never auto-categorise anything that affects reporting without review.
Month-end checklist drafts
Generate a first-pass checklist for month-end close based on your typical process.
Example approach:
```
Here's our month-end close checklist from last month.
Here are the open items and notes from this month.
Generate an updated checklist with:
- Standard items carried forward
- This month's specific items added
- Deadlines based on the calendar
- Flag any items that were problematic last month
```
Risk notes: The checklist is only as good as the template you provide. AI can't know about new requirements or changed processes unless you tell it.
Variance notes from the ledger
Draft variance explanations for budget vs actual reviews. AI can identify the patterns; you provide the business context.
Example approach:
```
Here's the budget vs actual for Q3.
For any line item with variance >10%, draft a brief explanation.
Consider:
- Timing differences (spending moved between periods)
- Volume changes (more/fewer units)
- Price changes (rate increases)
- One-time items
Format each as: [Line item]: [Variance %] - [Likely explanation]
```
Risk notes: AI explanations are hypotheses, not facts. They're useful for triggering your memory of what actually happened, not for submitting directly to leadership.
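The >10% screen itself doesn't need AI at all — you can compute the flagged list first and feed only those lines into the prompt. A sketch (the 10% threshold matches the prompt above; the data shape is illustrative):

```python
def variance_flags(lines: dict[str, tuple[float, float]], threshold: float = 0.10):
    """Return (line item, variance %) for every budget-vs-actual gap above threshold.

    `lines` maps line item -> (budget, actual); variance is relative to budget.
    """
    flagged = []
    for item, (budget, actual) in lines.items():
        if budget == 0:
            continue  # variance undefined; route to human review instead
        pct = (actual - budget) / budget
        if abs(pct) > threshold:
            flagged.append((item, round(pct * 100, 1)))
    return flagged
```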
Receipt matching and categorisation
Match receipts to expense categories based on vendor and amount patterns.
Example approach:
```
Here are this month's credit card transactions.
Here's our chart of accounts with category descriptions.
For each transaction, suggest the most likely category.
Flag any that don't clearly fit or might need splitting.
```
Risk notes: Miscategorisation has downstream effects. Use AI suggestions as a starting point, but review before posting.
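For the clear-cut cases, a simple keyword match against your chart of accounts gets you most of the way before AI is involved at all. A sketch with illustrative categories and keywords (replace both with your own); anything ambiguous falls through to review:

```python
# Keyword-to-category mapping is illustrative; use your chart of accounts.
CATEGORY_KEYWORDS = {
    "Travel": ["airline", "hotel", "uber", "rail"],
    "Software": ["github", "aws", "saas", "license"],
    "Meals": ["restaurant", "cafe", "catering"],
}

def suggest_category(description: str) -> str:
    """Suggest the category whose keyword appears in the transaction description.

    No keyword hit, or hits in more than one category, flags the item for review.
    """
    desc = description.lower()
    hits = [cat for cat, words in CATEGORY_KEYWORDS.items()
            if any(w in desc for w in words)]
    return hits[0] if len(hits) == 1 else "NEEDS REVIEW"
```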
Measurement
- Time spent on month-end close
- Variance explanations completed per hour
- Categorisation accuracy (spot-check 20 items)
HR and People
HR work is a mix of high-touch human conversations and repetitive documentation. AI excels at the latter while leaving the former to you.
The Wins
First-pass job descriptions
Stop starting from a blank page. AI can generate a solid first draft based on role requirements.
Example approach:
```
We're hiring a [role] for our [team/department].
Key responsibilities:
- [List 3-5 main duties]
Must-haves:
- [Non-negotiable requirements]
Nice-to-haves:
- [Preferred but not required]
Write a job description that:
- Leads with what the person will actually do
- Avoids jargon and clichés
- Is honest about the role (not oversold)
- Includes salary band: [range]
- Mentions [remote/hybrid/onsite] and [location]
```
Risk notes: AI job descriptions can drift toward generic corporate language. Edit aggressively to sound like your actual company. Also review for unintended bias in language.
Structured interview notes
Turn rambling interview notes into structured summaries.
Example approach:
```
Here are my raw notes from interviewing [candidate] for [role].
Summarise into:
- Key strengths demonstrated (with evidence)
- Concerns or gaps (with evidence)
- Cultural/team fit observations
- Recommended follow-up questions for next round
- Overall recommendation: Strong Yes / Yes / Maybe / No
```
Risk notes: AI might weight things differently than you would. The summary is a draft for your review, not a replacement for your judgement.
Onboarding checklists by role
Generate customised onboarding checklists based on role type.
Example approach:
```
New hire: [name], [role], starting [date], reporting to [manager].
Generate an onboarding checklist covering:
- Day 1: Admin and setup
- Week 1: Orientation and introductions
- Month 1: Role-specific training
- Month 2-3: Ramp targets
Include: IT setup items, key people to meet, systems to learn, first deliverables.
```
Risk notes: Generic checklists miss role-specific nuance. Use as a template, then customise based on what this specific person actually needs.
Policy Q&A with references
Turn your policy documents into an FAQ format with direct quotes and page references.
Example approach:
```
Here's our [leave/expense/remote work] policy document.
Generate a Q&A covering the 10 most common questions employees ask.
For each answer:
- Give a clear, plain-English response
- Quote the relevant policy section
- Include the page/section reference
```
Risk notes: AI might misinterpret policy language, especially for edge cases. Legal/compliance-sensitive policies should be reviewed by someone who knows the intent behind the wording.
Measurement
- Time to create job descriptions
- Interview note completion rate
- Employee questions about policies (should decrease)
Customer Support and Success
Support work is repetitive by nature - the same questions come up constantly. AI handles the repetition while you handle the edge cases and relationship building.
The Wins
Draft responses with source links
Generate first-pass responses to common queries, linked to your knowledge base.
Example approach:
```
Customer question: "[paste question]"
Here's our knowledge base on this topic: [paste relevant article]
Draft a response that:
- Directly answers their question
- Quotes the relevant section of the KB
- Links to the full article
- Offers to help further if needed
Tone: Friendly but professional. Not overly casual.
```
Risk notes: AI responses can feel generic. Add a personal touch before sending. Also verify the KB link is correct - AI can hallucinate URLs.
Call notes into next steps
Transform messy call notes into structured action items.
Example approach:
```
Here are my notes from a call with [customer].
Summarise into:
- Key issues raised
- Commitments we made (who, what, by when)
- Follow-up actions for us
- Follow-up actions for them
- Next scheduled touchpoint
```
Risk notes: AI might miss implied commitments or misunderstand context. Review carefully, especially anything that becomes a customer-facing promise.
Triage missing info before escalation
Before escalating a ticket, identify what information is missing.
Example approach:
```
Here's a support ticket that needs escalation.
Before I escalate, identify:
- What information is missing that the next tier will need?
- What troubleshooting steps should have been done?
- What clarifying questions should I ask the customer first?
```
Risk notes: This reduces unnecessary escalations but requires you to actually gather the missing info before escalating.
Pattern identification across tickets
Spot recurring issues from a batch of tickets.
Example approach:
```
Here are the last 50 tickets from this week.
Identify:
- The top 5 most common issues
- Any new issues that appeared this week
- Any issues that spiked in frequency
- Suggested knowledge base articles to create/update
```
Risk notes: AI pattern recognition is only as good as the data. Messy ticket categorisation = messy patterns.
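If your helpdesk export includes a category tag per ticket (an assumption — field names vary by tool), the frequency counting is a one-liner with the standard library, leaving AI for the harder "new issues" and "spike" questions:

```python
from collections import Counter

def top_issues(tickets: list[dict], n: int = 5) -> list[tuple[str, int]]:
    """Count tickets per category tag and return the n most common."""
    counts = Counter(t.get("category", "uncategorised") for t in tickets)
    return counts.most_common(n)
```

A large "uncategorised" bucket in the output is itself a signal that the ticket data needs cleaning before any pattern analysis will be trustworthy.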
Measurement
- First response time
- Escalation rate (should decrease)
- Customer satisfaction scores
- Tickets per resolved issue
Sales and Marketing
Sales and marketing generate enormous amounts of content - emails, proposals, campaigns, collateral. AI handles the first drafts; you add the insight and relationship context.
The Wins
First-pass briefs and positioning
Turn rough notes into structured creative briefs.
Example approach:
```
We're launching [product/feature/campaign].
Key details:
- Target audience: [who]
- Problem it solves: [what]
- Key differentiator: [why us]
- Desired action: [what we want them to do]
Generate a creative brief covering:
- Background and context
- Objectives
- Target audience profile
- Key messages (3 max)
- Tone and style guidance
- Deliverables needed
- Timeline
```
Risk notes: Briefs set direction for everything downstream. A weak brief creates weak work. Review carefully and add strategic nuance AI can't know.
Headline variants with test plan
Generate multiple headline options and a simple testing framework.
Example approach:
```
We need headlines for [campaign/email/page].
The goal is: [conversion/awareness/clicks]
The audience is: [who]
The key benefit is: [what]
Generate:
- 5 headline options with different angles
- For each, note the psychological lever (curiosity, fear, benefit, social proof)
- A simple A/B test plan: which 2 to test first, what to measure, sample size needed
```
Risk notes: AI headlines can be generic or clichéd. Push for specificity. Also, statistical significance in A/B tests requires more rigour than AI typically suggests.
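On the sample-size point: the standard two-proportion formula is easy to compute yourself rather than trusting an AI's estimate. A sketch using only the standard library (two-sided test at the usual 5% significance and 80% power defaults):

```python
from math import ceil, sqrt
from statistics import NormalDist

def ab_sample_size(p1: float, p2: float,
                   alpha: float = 0.05, power: float = 0.8) -> int:
    """Visitors needed per variant to detect a lift from rate p1 to p2.

    Standard two-proportion formula; a sanity check, not a substitute for a
    proper experimentation platform.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)
```

Detecting a one-point lift on a 5% conversion rate needs roughly 8,000 visitors per variant — which is why testing tiny differences on small lists rarely produces a meaningful result.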
Proposal summaries by segment
Customise proposal executive summaries for different audiences.
Example approach:
```
Here's our standard proposal for [service].
Generate three versions of the executive summary for:
- Technical decision-maker (focus: implementation, integration, security)
- Financial decision-maker (focus: ROI, cost, risk)
- Executive sponsor (focus: strategic value, competitive advantage)
Keep each under 200 words.
```
Risk notes: Customisation is good; misrepresentation is not. All versions should be accurate - just emphasised differently.
Follow-up email sequences
Draft follow-up emails for different scenarios.
Example approach:
```
Generate a 3-email follow-up sequence for a prospect who:
[went quiet after demo / asked for pricing / said "not right now"]
Each email should:
- Be under 100 words
- Provide value, not just "checking in"
- Have a clear, low-friction ask
- Be spaced [3 days / 1 week / 2 weeks] apart
```
Risk notes: Follow-up sequences can feel automated and impersonal. Customise based on what you actually know about the prospect.
Measurement
- Time to produce first drafts
- Open/click rates on AI-assisted emails
- Proposal win rates
- Content production velocity
Operations and General Management
Ops roles touch everything - planning, coordination, documentation, communication. AI helps you spend less time on the glue work and more time on the decisions.
The Wins
Meeting notes into actions
Transform meeting recordings or notes into structured summaries.
Example approach:
```
Here are the notes from [meeting name].
Summarise into:
- Key decisions made
- Action items (who, what, by when)
- Open questions to resolve
- Topics deferred to next meeting
- Next meeting date/agenda items
```
Risk notes: AI can miss context and subtext. It'll capture what was said, not what was meant. Review action items carefully.
Process documentation
Turn your knowledge of a process into documentation someone else can follow.
Example approach:
```
I'm going to describe how we [process name]. Ask me clarifying questions, then write it up as a step-by-step procedure.
The audience is: [new employee / contractor / backup person]
They need to be able to: [do it independently / know when to escalate]
```
Risk notes: Your explanation might skip steps that are obvious to you but not to a newcomer. Have someone unfamiliar with the process read the output.
Status report compilation
Aggregate updates from multiple sources into a single status report.
Example approach:
```
Here are updates from [team members / project tracks / systems].
Compile into a status report covering:
- Overall status: [Green/Yellow/Red] with 1-sentence summary
- Key accomplishments this period
- Blockers and risks
- Next period priorities
- Decisions needed from leadership
```
Risk notes: AI will synthesise what's in the inputs. If the inputs are incomplete or spin-heavy, the report will be too.
Email drafts for difficult messages
Get a starting point for communications you've been putting off.
Example approach:
```
I need to [deliver bad news / push back on a request / address a performance issue].
Context: [situation]
Relationship: [colleague / direct report / stakeholder]
Goal: [what outcome I want]
Draft a message that is:
- Direct but not harsh
- Focused on the issue, not the person
- Clear about what happens next
```
Risk notes: Difficult conversations deserve your voice, not AI's. Use the draft as a starting point, then rewrite in your own words.
Measurement
- Time spent on status reporting
- Meeting follow-up completion rate
- Documentation coverage (fewer "how do I do X?" questions)
Making It Stick
Start with one task
Don't try to AI-enable everything at once. Pick one task you do weekly. Make it work. Then expand.
Keep the receipts
For each win, document:
- What you did before
- What you do now
- Time saved
- Quality change
- Any issues encountered
This becomes your proof for expanding AI use and your playbook for teaching others.
Share what works
When something works, tell your team. The best AI adoption happens peer-to-peer, not top-down. A colleague saying "this saves me an hour a week" is more convincing than any mandate.
Stay sceptical
AI is a tool, not magic. Every output needs review. The time savings come from faster first drafts, not from removing human judgement.
The wins in this guide are starting points. Once you've proven value on small tasks, you'll start seeing opportunities everywhere. That's when the real transformation begins - not because someone told you to use AI, but because you've seen what it can do.
This is the full resource. If you need help applying it, contact us and we will walk you through the next step.