North Star Metric
Your North Star Metric (NSM) is the single metric that best captures the core value your product delivers to customers. It's the one number that, if it goes up, means your product is genuinely succeeding.
The key idea: Instead of tracking dozens of metrics and getting lost in dashboards, your entire team rallies around one metric that represents real customer value and predicts business growth.
How to identify your North Star:
- Ask: What value do we deliver? Not what features you have, but what progress customers make with your product
- Find the metric that measures that value - It should reflect actual usage and benefit, not just vanity metrics
- Check if it predicts revenue - Your NSM should correlate with long-term business success
- Test if it aligns your team - Everyone should understand how their work impacts this metric
Examples of North Star Metrics:
- Airbnb: Nights booked (measures marketplace value creation)
- Spotify: Time spent listening (measures engagement and value)
- Slack: Messages sent by teams (measures active collaboration)
- YouTube: Hours watched (measures content value delivered)
Simple process:
- Define your NSM based on core value delivered
- Break it down into input metrics you can influence (these become your "levers")
- Align team work to improving either the NSM or its key inputs
- Review regularly - Is the NSM moving? Are our actions connected to it?
Example in action: Let's say you run a recipe app and choose "Recipes completed" as your NSM (not just viewed, but actually cooked).
Your input metrics (levers) might be:
- New users trying their first recipe
- Returning users cooking more recipes
- Average recipes per active user
- Recipe success rate (did it turn out well?)
Now when deciding what to build:
- Add video tutorials? → Could improve recipe success rate → Increases NSM
- Social sharing feature? → Might increase new users → Increases NSM
- Premium recipe collection? → Could increase recipes per user → Increases NSM
- Fancy animations? → No clear connection to NSM → Deprioritize
Your North Star Metric becomes the filter for every product decision. If a feature doesn't plausibly move the NSM or its inputs, you question why you're building it.
You can use this framework for products of any size—from startup to enterprise. It works for B2C consumer apps, B2B SaaS tools, marketplaces, and even internal products. The principle is universal: find the one metric that represents value delivery and organize everything around it.
Why Product Managers Need a North Star Metric
You're tracking 47 metrics. Daily active users are up, but revenue is flat. Page views are climbing, but retention is dropping. Which metric should you optimize?
Without a North Star Metric, every team member optimizes for different things. Marketing chases signups. Product chases engagement. Sales chases deals. Everyone moves in slightly different directions, and the product lacks coherent strategy.
A North Star Metric solves this by answering one question: "What's the single best indicator that we're creating real value and building a sustainable business?"
This isn't about ignoring other metrics—it's about creating hierarchy. Your North Star sits at the top, representing the ultimate outcome. Everything else either drives it (input metrics) or benefits from it (output metrics like revenue).
The best product teams use their North Star Metric as:
- A decision filter (does this move the NSM?)
- An alignment tool (everyone knows what success looks like)
- A strategy compass (when lost, return to optimizing the NSM)
- A communication bridge (executives, teams, and stakeholders all understand it)
What Makes a Good North Star Metric?
Not every metric can be your North Star. A good NSM has specific characteristics:
Characteristic 1: Measures Value Delivery
Your NSM should reflect actual value customers receive, not vanity metrics or proxy indicators.
Bad NSM: Registered users (doesn't mean they got value)
Good NSM: Active users completing core actions (shows they're getting value)
Bad NSM: App downloads (installation isn't value)
Good NSM: Days active per month (engagement shows value)
Characteristic 2: Predicts Business Success
Your NSM should correlate with revenue and long-term sustainability. As it grows, the business should grow.
Test: If your NSM doubles but revenue stays flat, it's the wrong metric.
Characteristic 3: Actionable by the Team
Teams should understand how their work influences the NSM. If it's too abstract or influenced by factors outside your control, it won't drive behavior.
Bad NSM: Brand awareness (too abstract, hard to influence directly)
Good NSM: Weekly engaged users (clear, actionable, team can impact it)
Characteristic 4: Simple and Understandable
Anyone in the company should be able to explain what it means and why it matters.
Bad NSM: Engagement coefficient weighted by cohort velocity
Good NSM: Projects completed per team per month
Characteristic 5: Captures Growth Potential
Your NSM should have room to grow. If it's too narrow, you'll hit a ceiling and need a new metric.
Bad NSM: First-time user signups (becomes saturated as market matures)
Good NSM: Total weekly active users (scales with market expansion)
How to Choose Your North Star Metric
Step 1: Identify Your Core Value Proposition
What progress do customers make with your product? What job are they hiring you to do?
- Notion: Organize work and knowledge
- Uber: Get reliable transportation quickly
- Netflix: Access entertainment anywhere, anytime
- Zoom: Connect face-to-face remotely
Step 2: Find the Metric That Measures That Value
What observable behavior indicates customers received that value?
- Notion: Documents created and shared
- Uber: Completed rides
- Netflix: Hours streamed
- Zoom: Meeting minutes
Step 3: Test Against Business Outcomes
Does this metric correlate with revenue and retention?
Ask: "If this metric grows 50% next quarter, will our business meaningfully improve?"
If yes → strong NSM candidate
If no → it's a supporting metric, not your North Star
Step 4: Check for Team Alignment
Can every team see how their work impacts this metric?
- Engineers: Feature development influences usage
- Designers: Better UX increases completion rates
- Marketing: Acquisition brings new users
- Sales: Better targeting brings higher-quality customers who use more
- Support: Reduced friction increases engagement
Step 5: Look for Leading Indicators
Your NSM should predict future success, not just measure past activity.
Lagging: Monthly revenue (tells you what happened)
Leading: Weekly active users (predicts revenue growth)
Choose leading indicators when possible.
North Star Metrics by Product Type
Consumer Social Apps:
- Instagram: Daily active users sharing or engaging with content
- TikTok: Time spent watching videos
- WhatsApp: Messages sent daily
- Pattern: Engagement and network effects matter most
B2B SaaS Products:
- Asana: Tasks completed per team per week
- HubSpot: Marketing qualified leads generated
- Salesforce: Deals closed using the platform
- Pattern: Value delivery through workflow completion
Marketplaces:
- Etsy: Gross merchandise value (GMV)
- DoorDash: Orders delivered
- Upwork: Paid hours worked through platform
- Pattern: Transaction volume showing marketplace health
Media/Content Platforms:
- Medium: Total reading time
- Substack: Paid subscriptions active
- Twitch: Hours of live content watched
- Pattern: Content consumption and creator economics
Productivity Tools:
- Evernote: Notes created and synced
- Todoist: Tasks completed
- Loom: Videos recorded and shared
- Pattern: Creation and completion of core objects
Real Company Examples with North Star Metrics
Example 1: Amplitude's "Weekly Learning Users"
Amplitude (product analytics platform) chose "Weekly Learning Users" as their NSM—users who discovered an insight from their data each week.
Why this works:
- Measures actual value (insights, not just logins)
- Predicts retention (users who learn stay longer)
- Aligns teams around making data actionable
- Leads to expansion revenue (teams that learn buy more)
Input metrics:
- New users completing first chart
- Teams adding more teammates
- Questions answered through platform
- Integrations connected
Result: Clear focus on making analytics immediately valuable, not just comprehensive.
Example 2: HubSpot's Evolution of NSM
HubSpot changed their NSM as they matured:
- Early stage: Weekly active teams (focus on adoption)
- Growth stage: Marketing contacts managed (focus on value delivery)
- Scale stage: Customer revenue generated through platform (focus on ROI)
Key insight: Your NSM can evolve as your product and market position change. What matters in year 1 differs from year 5.
Example 3: Calm's "Minutes of Meditation Completed"
Calm (meditation app) chose completed meditation minutes, not just app opens or sessions started.
Why completion matters:
- Users who complete meditations experience benefits
- Experienced benefits drive retention and word-of-mouth
- Completion correlates with subscription renewal
- Focuses team on quality of content and experience
Input metrics:
- New users completing first meditation
- Session completion rate
- Users meditating multiple days per week
- Average meditation length
What they don't optimize: Total sessions (starting without completing shows low value)
Example 4: Miro's "Collaborative Boards Created"
Miro (online whiteboarding) focuses on boards with 2+ collaborators, not total boards or users.
Why collaboration matters:
- Single-user boards might mean trying but not adopting
- Collaboration creates stickiness (team dependency)
- Collaborative boards predict team expansion
- Network effects within teams drive growth
Input metrics:
- Teams inviting their first collaborator
- Boards with 5+ contributors
- Cross-team board usage
- Templates used for collaboration
Result: Product decisions prioritize collaborative features over solo-user features.
Example 5: Duolingo's "Daily Active Users Learning"
Duolingo optimizes for users who complete lessons daily, not just open the app.
Why daily learning matters:
- Language learning requires consistency
- Daily learners see progress and stay motivated
- Habit formation drives long-term retention
- Correlates with word-of-mouth growth
Input metrics:
- Streak completion rate (consecutive days)
- Lesson completion percentage
- Users setting daily goals
- Push notification response rate
Tactics aligned to NSM:
- Streak freezes (maintain habit during busy days)
- Learning reminders (encourage daily return)
- Bite-sized lessons (reduce friction to completion)
- Gamification (make daily learning rewarding)
Breaking Down Your North Star: Input Metrics
Your North Star Metric is the destination. Input metrics are the levers that get you there.
The equation approach:
Define your NSM as a simple equation of inputs you can influence.
Example: E-commerce marketplace
- NSM: Monthly orders completed
- Equation: (Active buyers) × (Average orders per buyer) × (Order completion rate)
Now you have three input metrics to optimize:
- Grow active buyers (acquisition and activation)
- Increase orders per buyer (engagement and retention)
- Improve completion rate (checkout optimization)
Example: B2B collaboration tool
- NSM: Weekly active collaborating teams
- Equation: (Teams onboarded) × (Team activation rate) × (Retention rate)
Input metrics:
- Teams onboarded (top of funnel)
- Team activation (first real collaboration)
- Retention rate (teams staying active)
The benefit: Different teams can own different inputs, all contributing to the North Star.
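The equation approach can be sketched in code. The sketch below uses the e-commerce example above; all numbers are hypothetical illustrations, but the structure shows why each input metric works as an independent lever on the NSM:

```python
# A minimal sketch of the "equation approach": the NSM expressed as a
# product of input metrics, so each team can estimate its lever's impact.
# All numbers below are hypothetical, not real benchmarks.

def monthly_orders(active_buyers: float, orders_per_buyer: float,
                   completion_rate: float) -> float:
    """NSM for the e-commerce example: Monthly orders completed."""
    return active_buyers * orders_per_buyer * completion_rate

# Current state (hypothetical)
baseline = monthly_orders(active_buyers=10_000,
                          orders_per_buyer=2.5,
                          completion_rate=0.80)

# What happens if one team lifts its lever by 10%?
scenarios = {
    "grow active buyers 10%": monthly_orders(11_000, 2.5, 0.80),
    "orders per buyer +10%":  monthly_orders(10_000, 2.75, 0.80),
    "completion rate +10%":   monthly_orders(10_000, 2.5, 0.88),
}

for name, value in scenarios.items():
    lift = (value / baseline - 1) * 100
    print(f"{name}: {value:,.0f} orders ({lift:+.1f}% NSM)")
```

Because the NSM is a product of its inputs, a 10% lift on any one lever yields roughly a 10% lift on the NSM, which makes it easy to compare initiatives owned by different teams.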
Common North Star Metric Mistakes
Mistake 1: Choosing a Vanity Metric
Vanity metrics look impressive but don't reflect value or predict success.
Red flags:
- Registered users (doesn't mean active or getting value)
- Page views (doesn't mean engagement or retention)
- Downloads (doesn't mean usage)
- Social media followers (doesn't mean business impact)
Fix: Ask "If this metric doubles but our revenue stays flat, would we be happy?" If no, it's vanity.
Mistake 2: Making It Too Complicated
If your team can't explain the NSM in one sentence, it won't drive behavior.
Bad: "Monthly active users weighted by engagement intensity and adjusted for cohort maturation"
Good: "Monthly active users who complete core actions"
Fix: Simplify until anyone can understand and explain it.
Mistake 3: Optimizing for the Metric, Not the Value
Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure."
Example: If your NSM is "messages sent," teams might add auto-notifications that technically count as messages but create zero value (or negative value).
Fix: Pair NSM with quality guardrails. "Messages sent by humans in active conversations" is harder to game.
Mistake 4: Not Connecting It to Revenue
Your NSM should predict revenue growth. If it doesn't, you're optimizing the wrong thing.
Test: Plot NSM growth vs. revenue growth over time. They should correlate. If NSM goes up and revenue stays flat, investigate why.
Fix: Validate correlation before committing. Run historical analysis or run small experiments.
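A rough version of that historical check can be done in a few lines. The quarterly series below are made-up illustration data, and the 0.7 threshold is a judgment call rather than a standard:

```python
# Sketch of validating that NSM growth correlates with revenue growth.
# Series and threshold are hypothetical illustrations.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical last eight quarters
weekly_active_users = [12_000, 13_500, 15_200, 16_000,
                       18_300, 21_000, 24_500, 27_000]
quarterly_revenue = [310_000, 340_000, 365_000, 380_000,
                     440_000, 500_000, 590_000, 640_000]

r = pearson(weekly_active_users, quarterly_revenue)
print(f"NSM vs. revenue correlation: r = {r:.2f}")
if r < 0.7:  # threshold is a judgment call, not a standard
    print("Weak correlation - investigate whether this is the right NSM.")
```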
Mistake 5: Setting It and Forgetting It
Your NSM isn't permanent. As your product matures, markets change, or strategy evolves, your NSM might need to evolve too.
Fix: Review your NSM annually. Ask: "Does this still represent our core value? Does it still predict success?"
Mistake 6: Having Multiple North Stars
"Our North Star is engagement and revenue and growth and retention."
That's not a North Star—that's a constellation. You can't optimize for everything simultaneously.
Fix: Choose one. The others become supporting metrics or inputs. Be disciplined about hierarchy.
How to Implement Your North Star Metric
Week 1: Define and Socialize
- Day 1-2: Choose your NSM using the framework above
- Day 3-4: Present it to leadership and key stakeholders, explain the reasoning
- Day 5: Share with entire team in all-hands or team meeting
Template for communication: "Our North Star Metric is [METRIC]. This measures [VALUE DELIVERED]. When this goes up, it means [BUSINESS OUTCOME]. Everyone contributes by [HOW THEIR WORK CONNECTS]."
Week 2: Build Measurement Infrastructure
Set up tracking:
- Dashboard showing NSM prominently
- Historical trend (how we've performed)
- Current state (where we are today)
- Target trajectory (where we're heading)
Make it visible:
- Email reports to leadership weekly
- Dashboard accessible to all teams
- Regular all-hands updates on NSM performance
Week 3: Map Input Metrics
Break down your NSM:
- What drives it?
- What can teams directly influence?
- Which inputs have the biggest leverage?
Assign ownership:
- Each input metric has an owner
- Owners propose initiatives to move their input
- Initiatives tie back to NSM impact
Week 4: Align Roadmap and OKRs
Review current work:
- Which initiatives clearly move NSM or inputs?
- Which initiatives have unclear NSM connection?
- Are we spending time on the right things?
Realign priorities:
- Double down on high-NSM-impact work
- Deprioritize or kill low-NSM-impact work
- Fill gaps where NSM needs support
Using Your North Star Metric for Decisions
Feature prioritization:
For each proposed feature, ask: "How does this impact our NSM?"
- Direct impact: Feature increases core action completion → Build it
- Indirect impact: Feature improves quality, leading to retention, increasing NSM → Consider it
- No clear impact: Feature is nice-to-have but doesn't move NSM → Deprioritize
Example:
- NSM: Weekly active projects created
- Feature request: Dark mode
- NSM impact: Unclear—doesn't directly increase project creation
- Decision: Put in backlog, prioritize features that drive project creation
Resource allocation:
When deciding where to invest team time, use NSM as your guide.
Ask: "Which initiative has the highest expected NSM impact per engineer week?"
This creates a rough ROI framework focused on your key metric.
Experiment design:
Every experiment should have a hypothesis about NSM impact.
Template: "We believe [CHANGE] will increase [INPUT METRIC] by [X%], leading to [Y%] improvement in NSM because [REASONING]."
Success criteria: If experiment improves input metric and NSM moves positively, scale it. If input moves but NSM doesn't, investigate why.
Team goals and OKRs:
Frame team objectives around NSM and its inputs.
Example for engineering team:
- Objective: Increase Weekly Active Users (NSM)
- Key Result 1: Improve app load time to under 2 seconds (quality improvement)
- Key Result 2: Ship collaboration features increasing multi-user sessions by 20%
- Key Result 3: Reduce crash rate below 0.5% (retention protection)
North Star Metric vs. Other Frameworks
NSM vs. OKRs:
- NSM: The ultimate outcome you optimize for continuously
- OKRs: Time-bound objectives and measurable key results
- Relationship: Your NSM often appears in your top-level OKRs
NSM vs. Pirate Metrics (AARRR):
- Pirate Metrics: Funnel framework (Acquisition, Activation, Retention, Revenue, Referral)
- NSM: Single metric representing value
- Relationship: Your NSM is usually in the Activation or Retention stage
NSM vs. KPIs:
- KPIs: Set of important metrics across different areas
- NSM: The single most important metric
- Relationship: NSM is your #1 KPI; others are supporting KPIs
NSM vs. Goals:
- Goals: What you want to achieve
- NSM: How you measure achievement
- Relationship: Your NSM should reflect progress toward strategic goals
Evolving Your North Star Over Time
Your North Star Metric can change as your product matures:
Stage 1: Early/Finding PMF
- Focus: Activation and value delivery
- NSM example: Users completing core action
Stage 2: Growth
- Focus: Scaling usage and retention
- NSM example: Weekly active users
Stage 3: Maturity
- Focus: Engagement depth and monetization
- NSM example: Revenue per user or value delivered per customer
Stage 4: Platform/Ecosystem
- Focus: Network effects and ecosystem health
- NSM example: Total ecosystem value creation
When to change your NSM:
- Current NSM is plateauing or maxing out
- Strategy fundamentally shifts (new market, business model)
- Current NSM no longer predicts business success
- Product has evolved significantly
How to change:
- Don't change frequently (creates confusion)
- When changing, explain why clearly
- Run both metrics in parallel for 1-2 quarters
- Fully commit to new NSM once transition is complete
The Bottom Line
Your North Star Metric is your product's true north—the single number that, above all others, indicates you're creating value and building a sustainable business.
It's not about ignoring other metrics. It's about creating clarity. When everyone understands the one metric that matters most, decisions become simpler, alignment becomes easier, and strategy becomes clearer.
Three principles for NSM success:
- Choose wisely: Your NSM should measure value delivery, predict revenue, and be actionable by teams.
- Commit fully: Once chosen, organize everything around it—roadmaps, OKRs, team goals, and daily decisions.
- Review regularly: Check that NSM movement correlates with business success. If correlation breaks, investigate and adjust.
Start by asking: "What's the one metric that best represents the value we deliver to customers?"
Once you have the answer, make it impossible to ignore. Put it in every dashboard, every all-hands, every strategic discussion. Make it the first thing you check in the morning.
Because when you're clear about what success looks like, achieving it becomes significantly easier.
What's your North Star?
Quick Reference Card
Definition: The single metric that best captures the core value your product delivers to customers.
Characteristics of a Good NSM:
- Measures value delivery (not vanity)
- Predicts business success
- Actionable by teams
- Simple and understandable
- Has room to grow
How to Choose:
- Identify core value proposition
- Find metric measuring that value
- Test correlation with revenue
- Check team alignment
- Verify it's a leading indicator
Common Examples:
- Consumer: Time spent, content created, actions completed
- B2B SaaS: Active teams, workflows completed, value generated
- Marketplace: Transactions, GMV, hours booked
- Media: Content consumed, subscriptions active
Red Flags:
- Too complicated to explain
- Doesn't correlate with revenue
- Vanity metric (looks good but meaningless)
- Multiple "North Stars" (defeats the purpose)
Remember: Break your NSM into input metrics that teams can directly influence. The NSM is the destination; inputs are the levers.
Related Tools
Reinforcing Feedback Loops
A reinforcing feedback loop (also called a virtuous cycle or positive feedback loop) is when an action creates results that amplify the original action, creating exponential growth over time.
The basic concept: Output feeds back as input, creating a cycle that strengthens itself.
Simple formula: Action A → Result B → Result B makes Action A stronger → More of Action A → Even more of Result B → Cycle continues
Everyday example: A snowball rolling downhill. Snow sticks to ball → Ball gets bigger → Bigger ball picks up more snow → Gets even bigger → Picks up even more snow
How to identify feedback loops in products:
- Map the cycle: What action leads to what result?
- Find the feedback: Does that result encourage more of the original action?
- Check for amplification: Does each cycle make the next cycle stronger?
- Look for compounding: Does the effect grow exponentially, not linearly?
Common product feedback loops:
- Network effects: More users join → More valuable the product → Even more users join → Even more valuable
- Content loops: Users create content → Attracts more users → More users create more content → Attracts even more users
- Data improvement loops: More usage → Better data → Better product → More usage → Even better data
- Reputation loops: Good product → Happy customers → Positive reviews → More customers → More success stories
Simple example in action: Let's say you build a restaurant review app.
Initial state: You have 100 restaurants and 1,000 users.
The loop starts:
- Users write reviews → Restaurants get more visibility
- More restaurants join to get discovered → More restaurant options
- More restaurants → Attracts more users (better selection)
- More users → More reviews written
- More reviews → Better data quality and trust
- Better quality → Even more users join
- More users → Even more restaurants want to join
- Cycle repeats, each time stronger
After 6 months: 1,000 restaurants, 10,000 users (10x growth)
After 12 months: 5,000 restaurants, 50,000 users (exponential)
The key insight: You didn't need to manually add every restaurant or recruit every user. The loop fed itself. Initial effort created a self-reinforcing system.
How to design products with feedback loops:
- Identify the core action that creates value (e.g., posting content, making connections, completing tasks)
- Find what makes that action more valuable over time (more content, more connections, better insights)
- Design the product so results encourage more action (notifications, incentives, visibility)
- Remove friction from completing the loop (make it easy to do the action again)
- Measure loop velocity - how fast do users complete the cycle?
You can apply this to any product type—B2C apps, B2B tools, marketplaces, SaaS platforms, even internal products. The principle is universal: design systems where success breeds more success.
Why Product Managers Need to Understand Feedback Loops
Most products grow linearly: you add resources (money, people, features), you get proportional growth. Double your marketing spend, double your users. Hire two more engineers, ship twice as many features.
Reinforcing feedback loops create exponential growth: the same input generates increasing output over time. You spark the loop, and it accelerates itself.
This is how products achieve escape velocity—they reach a point where growth becomes self-sustaining, where each user or action makes the product more valuable, attracting more users who create more value.
Understanding feedback loops helps you:
- Design products that compound in value instead of requiring constant resource injection
- Identify moats that competitors can't easily cross (established loops are hard to replicate)
- Spot where growth is stalling (which loop is broken or slowing down?)
- Make strategic decisions about where to invest (accelerate the loop vs. add new features)
- Predict long-term outcomes (small advantages in loop velocity create massive advantages over time)
The best products aren't just good—they get better the more people use them. That's no accident. That's intentional feedback loop design.
What Makes a Reinforcing Feedback Loop?
A true reinforcing feedback loop has four essential elements:
Element 1: The Core Action
The behavior you want users to repeat. This should create direct value.
Examples: Posting content, inviting teammates, completing transactions, sharing results, adding data
Element 2: The Value Increase
The action must make the product more valuable for others or for future use.
Examples:
- More content → More reasons to visit
- More users → More network value
- More data → Better recommendations
- More transactions → Better matching
Element 3: The Motivation to Return
Increased value must give users reason to take the action again (or attract new users to take it).
Examples:
- Better content → More engagement → More content creation
- More connections → More reasons to stay active
- Better recommendations → More usage → More data → Even better recommendations
Element 4: Compounding Effect
Each cycle must be stronger than the last. Linear growth isn't a feedback loop—exponential growth is.
Test: If doubling the action only doubles the value, it's linear. If doubling the action more than doubles the value, you have a loop.
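The difference between linear growth and a compounding loop is easy to see in a toy simulation. The parameters below are made up purely for illustration: the linear product adds a fixed number of users per period, while in the loop each existing user attracts a small fraction of new users:

```python
# Toy simulation: linear growth vs. a reinforcing loop.
# All parameters are hypothetical illustrations.

def linear_growth(start: int, added_per_period: int, periods: int) -> list[int]:
    """Each period adds a fixed number of users (no loop)."""
    users = [start]
    for _ in range(periods):
        users.append(users[-1] + added_per_period)
    return users

def loop_growth(start: int, attract_rate: float, periods: int) -> list[int]:
    """Each period, every existing user attracts attract_rate new users."""
    users = [start]
    for _ in range(periods):
        users.append(round(users[-1] * (1 + attract_rate)))
    return users

linear = linear_growth(start=1_000, added_per_period=200, periods=12)
loop = loop_growth(start=1_000, attract_rate=0.20, periods=12)

print(f"Linear after 12 periods: {linear[-1]:,}")  # 3,400
print(f"Loop after 12 periods:   {loop[-1]:,}")
```

Both start at the same size, but the loop's output feeds back as input, so the gap widens every period; that widening gap is the compounding effect described above.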
Types of Reinforcing Feedback Loops in Products
Type 1: Network Effects (Direct)
Value increases directly with number of users.
Formula: More users → More valuable to each user → Attracts more users
Examples:
- WhatsApp: More contacts on platform → More useful → More people join to connect
- LinkedIn: More professionals → Better networking → More professionals join
- Zoom: More people using it → Easier to schedule meetings (everyone has it) → More adoption
How to accelerate: Reduce friction to invite others, create FOMO for non-users, make single-player mode weak (force network value)
Type 2: Data Network Effects
Product improves through accumulated usage data.
Formula: More usage → Better data → Better product → More usage
Examples:
- Spotify: More listening → Better recommendations → More engagement → More listening data
- Google Maps: More drivers → Better traffic data → Better routes → More drivers use it
- Grammarly: More writing → Better AI corrections → More accurate → More writers use it
How to accelerate: Make improvements visible to users, faster feedback cycles, show personalization benefits
Type 3: Content/Supply Loops
User-generated content attracts more users who generate more content.
Formula: More content → Attracts more users → Users create more content → Attracts even more users
Examples:
- YouTube: More videos → More viewers → More creators make videos → Even more content
- Reddit: More discussions → More readers → More contributors → More discussions
- Medium: More articles → More readers → More writers publish → More articles
How to accelerate: Reward content creators with visibility/money, reduce friction to create, improve discovery
Type 4: Marketplace Liquidity Loops
More supply attracts demand, more demand attracts supply.
Formula: More sellers → More options for buyers → More buyers → Attracts more sellers
Examples:
- Airbnb: More hosts → Better selection → More guests → More revenue for hosts → More hosts join
- Uber: More drivers → Faster pickup → More riders → More demand for drivers → More drivers join
- Etsy: More sellers → More unique products → More shoppers → More sales opportunity → More sellers
How to accelerate: Balance both sides carefully, reduce friction for underserved side, create density in geographic/category pockets
Type 5: Viral Loops
Users invite others as part of using the product.
Formula: User A invites User B → User B uses product → User B invites User C → Exponential growth
Examples:
- Dropbox: Share folder → Recipient needs Dropbox → Recipient signs up → Shares their own folders
- Calendly: Send meeting link → Recipient experiences ease → Recipient adopts Calendly
- Loom: Share video → Recipient sees value → Recipient creates account to make videos
How to accelerate: Make sharing core to product (not optional), show value immediately to recipients, reduce signup friction
Type 6: Reputation/Credibility Loops
Success creates reputation, reputation creates more success.
Formula: Good results → Testimonials/case studies → Attracts better customers → Better results → Stronger reputation
Examples:
- Stripe: Powers major companies → "Used by Shopify, Lyft" → More startups trust it → Powers more major companies
- Figma: Design teams at top companies use it → "Industry standard" perception → More companies adopt → Strengthens position
- Superhuman: Exclusive/high-performing users → Premium brand → Attracts similar users → Maintains premium positioning
How to accelerate: Make success visible, create case studies, build exclusivity/status into product
Real Company Examples of Feedback Loops
Example 1: Notion's Template Loop
Notion built a powerful reinforcing loop around templates and community content.
The loop:
- Users create useful templates → Share with community
- Templates attract new users searching for solutions
- New users customize templates → Create their own versions
- Best templates get featured → Original creators gain following
- Creators make more templates → Even more variety
- More templates → Notion becomes "go-to" for any use case
- "Go-to" status → More users join → More templates created
Result: Notion's template gallery became a growth engine. Users solved their own discovery problem and recruited new users.
Key insight: They didn't create all templates themselves—they designed a system where users expanded the value for each other.
Example 2: Figma's Collaborative Design Loop
Figma's multiplayer features created a feedback loop traditional design tools couldn't match.
The loop:
- Designer uses Figma → Invites teammates for feedback
- Teammates see design in real-time → Experience "wow" moment
- Teammates adopt Figma for their projects → Invite more people
- More people on Figma → Easier to collaborate
- Collaboration becomes standard → Files stay in Figma
- More files in Figma → Harder to switch away (lock-in)
- Team growth → More seats purchased → More revenue
Acceleration factors:
- Free tier for individuals (reduced friction)
- Real-time cursor visibility (showcased collaboration magic)
- Commenting and feedback tools (made collaboration valuable)
- Easy sharing links (viral distribution)
Result: Grew from startup to a $20B acquisition offer from Adobe, primarily through collaborative feedback loops.
Example 3: Duolingo's Engagement Loop
Duolingo engineered multiple reinforcing loops around daily learning habits.
Primary loop:
- User learns daily → Builds streak
- Streak becomes valuable (psychological investment)
- User motivated to maintain streak → Returns next day
- Longer streak → Higher commitment → Less likely to break
- Daily learning → Visible progress → More motivation
- Progress milestones → Sharing on social → Brings new users
- New users start their own streaks → Cycle continues
Supporting loops:
- Leaderboards → Competition with friends → More engagement → Better data → Better curriculum → More engagement
- Push notifications → Bring users back → Complete lessons → Notification timing improves → Better effectiveness
Result: 30%+ daily active user rate—extraordinary for an education app. Loops created habit formation at scale.
Example 4: Superhuman's Referral Scarcity Loop
Superhuman created a feedback loop through controlled access and referral mechanics.
The loop:
- Waitlist creates scarcity → Exclusivity perception
- Exclusive users get "insider" status → Share to demonstrate status
- Referral invites are limited → Makes invitations valuable
- Invited users go through onboarding → High-quality user base
- High-quality users → Great case studies → More desirability
- More desirability → Longer waitlist → More exclusivity
- More exclusivity → Higher willingness to pay → Better revenue
Key design choices:
- Mandatory onboarding call (filtered users, ensured quality)
- Limited referrals (made invitations valuable)
- High price point (reinforced premium positioning)
Result: Sustained 10,000+ person waitlist, 90%+ retention, premium pricing accepted.
Example 5: Airtable's Template + Integration Loop
Airtable combined template creation with integrations to create compounding value.
The loop:
- Users build databases for specific workflows → Create templates
- Templates shared → Attract users with similar needs
- More users → More feature requests → More integrations built
- More integrations → More powerful workflows possible
- More possibilities → More templates created
- More templates → "Airtable can do anything" perception
- Broader use cases → More diverse users → Even more templates

Acceleration through ecosystem:
- Template marketplace made discovery easy
- Integrations with other tools expanded use cases
- API enabled custom solutions
- Community showcased creative uses

Result: Evolved from "spreadsheet alternative" to "workflow platform" through feedback loops, not just features.

How to Identify Feedback Loops in Your Product
Step 1: Map Your Core User Actions
List the key behaviors users perform:
- Creating content
- Inviting others
- Making transactions
- Sharing results
- Adding data
- Giving feedback

Step 2: Trace the Impact
For each action, ask: "What happens next?"
- Does it create value for other users?
- Does it improve the product?
- Does it create reasons to return?
- Does it attract new users?

Step 3: Look for Cycles
Find where output feeds back as input:
- Better product → More usage → Better product
- More users → More value → More users
- More content → More visitors → More content

Step 4: Test for Amplification
True feedback loops amplify over time:
- Is cycle 10 stronger than cycle 1?
- Does early advantage compound?
- Would doubling the action more than double the result?

Step 5: Measure Loop Velocity
How fast do users complete the cycle?
- Faster loops = faster growth
- Remove friction at each step
- Incentivize loop completion

Designing Products with Feedback Loops
Principle 1: Make the Core Action Valuable
The action that starts your loop must create immediate value, or users won't complete it.
Bad: "Create a profile" (no immediate value)
Good: "Post your first job and get applications" (immediate value)

Principle 2: Reduce Friction in the Loop
Every point of friction slows the loop. Smooth the path.
Example - Dropbox:
- Friction: Sharing files requires email, download, reply
- Reduction: One link, instant access, automatic sync
- Result: Sharing becomes trivial, loop accelerates

Principle 3: Make Benefits Visible
Users need to see that the product is improving or becoming more valuable.
Tactics:
- "Your recommendations are getting better" (Spotify)
- "Your network has grown to 500 connections" (LinkedIn)
- "Your team completed 100 projects this month" (Asana)

Principle 4: Create Triggers for Re-engagement
Don't wait for users to remember. Bring them back into the loop.
Examples:
- Notifications when someone interacts with your content
- Emails showing what you missed
- Reminders of streaks or progress
- Prompts when value has accumulated

Principle 5: Reward Early Contributors
The first users who create value should benefit disproportionately.
Why: It creates an incentive to start the loop even when the network is small.
Examples:
- Early Airbnb hosts got priority in search
- Early YouTube creators got partnership opportunities
- Early Reddit users accumulated high karma
- Early crypto miners acquired coins at the lowest cost

Principle 6: Design for Compounding
Each cycle should make the next cycle easier or more valuable.
Test questions:
- Do the first 100 users make user 101 more valuable?
- Does the 1,000th piece of content attract more users than the 100th?
- Is retention improving over time as loops mature?

Common Feedback Loop Mistakes
Mistake 1: Confusing Feedback Loops with Growth Tactics
A growth tactic gets you users once. A feedback loop gets you users continuously.
- Not a loop: SEO blog posts (one-time traffic)
- Is a loop: User-generated content that ranks in SEO → Brings users who create more content
Fix: Look for cycles where output feeds back as input.
Mistake 2: Designing Loops That Don't Actually Reinforce
You think you have a loop, but it's actually linear.
Fake loop: "Good product → Happy customers → Referrals"
Stop there, and it's linear. Each customer refers once, then stops.
Real loop: "Good product → Happy customers → Referrals → More users → More use cases discovered → Product improves → Even happier customers"
Fix: Ensure output genuinely amplifies input, creating an exponential effect.

Mistake 3: Ignoring Loop Velocity
A slow loop loses to a fast loop, even if the slow loop is "better."
Example:
- Product A: Loop completes in 1 week
- Product B: Loop completes in 1 day
Product B completes 7 loops while Product A completes 1. Product B compounds faster.
Fix: Measure and optimize time-to-complete-loop. Remove friction at every step.

Mistake 4: Breaking Loops with Monetization
You discover a loop, then ruin it by charging for the loop action.
Example: Free users could invite others (loop worked) → Changed to paid-only invites → Loop broke
Fix: Monetize around loops, not within them. Let the loop run freely; charge for premium features.

Mistake 5: Building Loops That Don't Scale
Some loops work at 100 users but break at 10,000.
Example: Manual curation of user content works early but doesn't scale. You need algorithmic curation for loops to continue at scale.
Fix: Design loops that strengthen with scale, not weaken.

Mistake 6: Neglecting the Cold Start Problem
Loops need initial momentum. If the first 100 users get no value, they won't start the loop.
Fix: Solve cold start explicitly:
- Seed initial content/supply
- Create a single-player mode (value without the network)
- Focus on dense pockets (one city, one university, one niche)

Measuring and Optimizing Your Feedback Loops
Metric 1: Loop Completion Rate
What percentage of users complete the full cycle?
Start action → See value → Take action again
Example: Users who post once → Get engagement → Post again
Target: Increase completion rate (more users participating in the loop)

Metric 2: Time to Complete Loop
How long from action to result to next action?
Faster loops = more cycles = more compounding
Example: Time from "post content" → "get feedback" → "post again"
Target: Reduce loop time (accelerate cycles)

Metric 3: Loop Frequency
How often does each user complete the cycle?
Daily loops compound faster than monthly loops
Example: Daily active users vs. monthly active users
Target: Increase frequency (more cycles per user)

Metric 4: Value Added Per Cycle
How much does each cycle improve the product?
Better data, more content, stronger network
Example: Recommendation accuracy improving with each usage cycle
Target: Increase value-add per cycle (stronger amplification)

Metric 5: Cohort Retention Over Cycles
Do users who complete more cycles retain better?
- Compare retention of users who complete 1 vs. 5 vs. 10 cycles
- You should see improving retention with more cycles
Target: Stronger retention improvement with cycle count

Optimization strategies:
- Identify the bottleneck in the loop: Where do users drop out? Fix that step.
- A/B test friction reduction: Remove obstacles at each stage.
- Experiment with incentives: What motivates loop completion?
- Improve feedback visibility: Show users the loop is working.
- Optimize timing: When's the best moment to re-engage?

Balancing Reinforcing and Balancing Loops
Not all loops should reinforce. Sometimes you need balancing loops (negative feedback) to prevent runaway problems.
When you need balancing loops:
- Problem: Virality brings low-quality users
  Balancing loop: Quality controls → Reduce spam → Maintain high-quality community → Attracts quality users
- Problem: Popular content dominates, new content gets no visibility
  Balancing loop: Boost new content → Gives it a chance → Diversifies ecosystem → Prevents stagnation
- Problem: Power users overwhelm beginners
  Balancing loop: Segment by skill level → Beginners not intimidated → Stay longer → Eventually become power users

The goal: Reinforce what you want to grow, balance what you want to stabilize.

The Bottom Line
Reinforcing feedback loops are the difference between products that require constant effort to grow and products that gain momentum and grow themselves.
Linear growth requires constant fuel—more marketing spend, more sales reps, more content creation. Exponential growth through feedback loops requires initial effort to start the cycle; then the cycle fuels itself.
The best products don't just serve users—they create systems where serving users makes serving more users easier and more valuable.
Three steps to leverage feedback loops:
- Identify existing loops: Where does output feed back as input in your product? Map the cycles.
- Optimize loop velocity: How can you make cycles faster? Remove friction at every step.
- Design new loops: What actions could create reinforcing cycles? Build them into your product deliberately.
Start by mapping one feedback loop in your product this week. Trace the cycle from action to value to reinforcement. Measure how long it takes. Find one way to speed it up or strengthen it.
Because in product management, the most powerful strategy isn't working harder—it's designing systems that work harder for you.
What feedback loops are you building?

Quick Reference Card
Definition: A cycle where output feeds back as input, creating self-amplifying growth.
Formula: Action → Result → Result strengthens Action → More Action → Stronger Result → Cycle continues

Four Essential Elements:
- Core action (what users do)
- Value increase (action makes product better)
- Motivation to return (value brings users back)
- Compounding effect (each cycle stronger than the last)

Common Loop Types:
- Network effects (more users → more value)
- Data loops (more usage → better product)
- Content loops (more content → more users)
- Marketplace loops (more supply ↔ more demand)
- Viral loops (users invite others)
- Reputation loops (success → credibility → more success)

Key Metrics:
- Loop completion rate
- Time to complete loop
- Loop frequency per user
- Value added per cycle
- Retention improvement over cycles

Remember: Design products where success breeds more success. The strongest moats are reinforcing feedback loops competitors can't replicate.
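Loop completion rate and time to complete the loop are directly measurable from ordinary product events. Below is a minimal sketch under stated assumptions: a toy event log in which "start" marks the core loop action, "value" marks the user seeing the result, and "restart" marks the repeated action. The event names, data, and helper logic are all hypothetical, not a specific analytics tool's API.

```python
from collections import defaultdict
from datetime import datetime
from statistics import median

# Hypothetical event log: (user_id, event, timestamp).
events = [
    ("u1", "start",   datetime(2024, 1, 1)),
    ("u1", "value",   datetime(2024, 1, 2)),
    ("u1", "restart", datetime(2024, 1, 3)),
    ("u2", "start",   datetime(2024, 1, 1)),
    ("u2", "value",   datetime(2024, 1, 5)),   # saw value but never came back
    ("u3", "start",   datetime(2024, 1, 2)),
    ("u3", "value",   datetime(2024, 1, 3)),
    ("u3", "restart", datetime(2024, 1, 9)),
]

by_user = defaultdict(dict)
for user, event, ts in events:
    by_user[user][event] = ts  # keep one timestamp per loop step, for simplicity

started = [u for u, e in by_user.items() if "start" in e]
completed = [u for u in started if "restart" in by_user[u]]

# Metric 1: loop completion rate — users who started the loop AND repeated the action.
completion_rate = len(completed) / len(started)

# Metric 2: time to complete the loop, from first action to repeated action.
loop_days = [(by_user[u]["restart"] - by_user[u]["start"]).days for u in completed]
median_loop_days = median(loop_days)

print(completion_rate, median_loop_days)
```

In practice you would compute these per cohort and track them over time; the point is only that the loop metrics in the quick reference are concrete numbers, not abstractions.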
Eisenhower Matrix
Why Product Managers Need the Eisenhower Matrix
You're drowning in Slack messages, stakeholder requests are piling up, and your roadmap is bursting with features everyone swears are "critical." Sound familiar?
The Eisenhower Matrix—named after President Dwight D. Eisenhower, who famously said, "What is important is seldom urgent, and what is urgent is seldom important"—is your escape route. This deceptively simple 2x2 framework helps product managers cut through noise and focus on work that actually moves the needle.
Unlike complex prioritization scoring systems, the Eisenhower Matrix takes minutes to learn and seconds to apply. It's the thinking framework that helps you say "no" with confidence and "yes" to the right things.

Understanding the Four Quadrants
The matrix divides all tasks into four categories based on two dimensions: urgency and importance.

Quadrant 1: Urgent + Important (DO FIRST)
These are your genuine fires—production outages, critical bugs affecting revenue, time-sensitive compliance issues. For PMs, this might include a payment gateway failure or responding to a major customer churn risk.
PM Reality Check: Most people think this quadrant should be full. If yours is, you're in reactive mode. Great PMs keep this quadrant as empty as possible.

Quadrant 2: Not Urgent + Important (SCHEDULE)
This is where the magic happens. Strategic planning, user research, competitor analysis, roadmap refinement, team development, and building stakeholder relationships all live here. This quadrant builds your product's future.
The PM Sweet Spot: Top performers spend 60-70% of their time here. Block calendar time for this work before the week starts.

Quadrant 3: Urgent + Not Important (DELEGATE)
These tasks scream for attention but don't require your unique skills. Status update requests, routine data pulls, meeting notes, and some stakeholder questions fit here.
Delegation Strategy: Train your team, create self-service resources, or automate.
That "urgent" report request? Teach stakeholders to access the dashboard themselves.

Quadrant 4: Not Urgent + Not Important (ELIMINATE)
The time-wasters: excessive meeting attendance, rabbit-hole research with no clear goal, compulsive Slack checking, and perfectionism on low-impact deliverables.
Truth Bomb: We do these because they feel productive. They're not. Ruthlessly eliminate them.

How to Apply It: A Product Manager's Playbook
Step 1: Brain Dump (5 minutes)
List everything competing for your attention this week. Feature requests, meetings, research tasks, stakeholder asks—everything.

Step 2: Plot and Question (10 minutes)
Place each item in a quadrant. Then challenge yourself:
- Is this stakeholder request truly important, or just loudly urgent?
- Will this feature move key metrics, or does it just feel good to build?
- Am I doing this because I should, or because it's comfortable?

Step 3: Act Decisively
- Quadrant 1: Do today
- Quadrant 2: Calendar block it for this week (make it non-negotiable)
- Quadrant 3: Delegate with clear instructions
- Quadrant 4: Delete, decline, or defer indefinitely

Real Product Scenarios Sorted
Scenario: The CEO's "Quick Feature" Request
Seems like: Q1 (urgent + important)
Actually might be: Q2 or Q3. Ask: "What problem does this solve? What's the impact if we delay two weeks?" Often becomes a scheduled strategy discussion.

Scenario: Weekly Metrics Review
Seems like: Q3 (urgent routine)
Actually should be: Q2 (important strategic ritual). This isn't administrative—it's how you spot trends and make better decisions.

Scenario: Refining Persona Documentation
Seems like: Q4 (nice-to-have)
Actually is: Q2 (important foundation). Good personas prevent wasted development cycles later.

Scenario: Firefighting a Bug That Affects 2% of Users
Context matters: If those 2% are enterprise customers representing 40% of revenue? Q1. If they're free-tier users with a simple workaround? Q3 or even Q4.
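The quadrant logic itself is simple enough to write down. A minimal sketch, assuming you have already made the honest urgent/important judgment for each task (the task names and judgments below are hypothetical):

```python
def quadrant(urgent: bool, important: bool) -> str:
    """Map the two Eisenhower dimensions to the recommended action."""
    if urgent and important:
        return "DO FIRST"   # Q1: genuine fires
    if important:
        return "SCHEDULE"   # Q2: strategic work; block calendar time
    if urgent:
        return "DELEGATE"   # Q3: loud, but not yours to do
    return "ELIMINATE"      # Q4: time-wasters

# A hypothetical weekly brain dump, already judged on both dimensions.
tasks = [
    ("Payment gateway outage",       True,  True),
    ("Quarterly roadmap review",     False, True),
    ("Pull metrics for stakeholder", True,  False),
    ("Polish internal slide deck",   False, False),
]

for name, urgent, important in tasks:
    print(f"{quadrant(urgent, important):9s} {name}")
```

The code is trivial by design: the matrix's value is not the mapping but the discipline of judging urgency and importance separately before deciding.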
Three Common PM Traps (And How to Avoid Them)
Trap 1: Living in Quadrant 1
You're always fighting fires, never preventing them. Your calendar owns you.
Fix: Spend 2 hours every Friday in Q2 work. Do the strategic thinking that prevents next week's fires. Schedule user research. Review your roadmap assumptions. Build the process that eliminates recurring issues.

Trap 2: Confusing Urgent with Important
A stakeholder's urgent request isn't automatically important. Noise is often just... loud.
Fix: Pause and ask: "If I delay this 48 hours, what actually breaks?" The answer reveals true priority. Create a "decision filter" based on your product goals and reference it before committing.

Trap 3: Quadrant 3 Guilt
You feel bad delegating because you're "available" or it's "faster to do it yourself."
Fix: Calculate the cost. If a task takes you 30 minutes monthly, that's 6 hours yearly. Training someone takes 2 hours once. You just bought back 4 hours for Q2 work. Delegation is a strategic investment.

Weekly Eisenhower Ritual (15 Minutes)
Make this your Sunday evening or Monday morning routine:
- List: Write down everything on your plate
- Quadrant: Assign each item (be honest, not aspirational)
- Block: Schedule Q2 work first—protect this time fiercely
- Limit Q1: If you have more than 3-4 items here, something's wrong with your system
- Purge Q4: Delete at least two low-value activities from your week

Leveling Up: Matrix + Impact
Once you've mastered the basic matrix, layer in impact assessment. Within each quadrant, rank items by potential impact on your North Star metric. This creates a prioritized list within priorities:
- Q1: Do the urgent-important tasks with the highest impact first
- Q2: Schedule the important work that moves your key metrics most
- Q3: Delegate high-volume tasks before one-offs
- Q4: Eliminate the biggest time-wasters first

The Bottom Line
The Eisenhower Matrix won't make your job easier—it will make it clearer.
You'll still have hard choices, but you'll make them consciously instead of reactively.
Great product management isn't about doing more things. It's about doing the right things. This framework helps you identify what "right" means for your product, your team, and your career.
Start small: use it for one week. Plot your tasks each Monday. By Friday, you'll see which quadrant you naturally gravitate toward—and where you need to shift your focus.
Because at the end of the day, the best product managers don't just manage priorities. They create them.

Quick Reference Card
- DO FIRST (Q1): Critical bugs, production issues, imminent deadlines, genuine crises
- SCHEDULE (Q2): Strategy, research, roadmap planning, team development, preventing future fires
- DELEGATE (Q3): Status requests, routine reporting, administrative tasks, non-PM work
- ELIMINATE (Q4): Low-value meetings, busy work, excessive polish, scope-creep features
Jobs to Be Done
Why Product Managers Need Jobs to Be Done
Your analytics say users want feature X. Your surveys confirm it. You build it. Adoption is... disappointing. Why? Because you asked what users want, not why they want it.
Jobs to Be Done (JTBD) is a framework that shifts your focus from demographics, personas, and feature requests to the fundamental question: "What job is the customer trying to get done?"
People don't buy products—they "hire" them to make progress in their lives. A commuter doesn't buy coffee; they hire coffee to stay alert during a boring drive. An executive doesn't buy project management software; they hire it to look in control during board meetings.
When you understand the job, you stop competing on features and start competing on how well you help customers make progress. This is how products become indispensable.

What Is Jobs to Be Done?
Jobs to Be Done is a framework for understanding customer motivation through the lens of progress.
Core idea: Customers don't want your product. They want to make progress in a specific circumstance. Your product is just the tool they hire to get that job done.
The famous example: A fast-food chain wanted to sell more milkshakes. Traditional research asked: "How can we improve our milkshakes?" (better taste, lower price, more flavors). JTBD research asked: "What job are people hiring milkshakes to do?"
Discovery: 40% of milkshakes were bought before 8 AM by solo commuters. The job? Keep me occupied and full during my boring morning commute without making a mess.
Competitors weren't other milkshakes—they were bananas (too quick to eat), bagels (messy, need two hands), and Snickers bars (gone in three bites, then still hungry).
Solution: Make thicker milkshakes that last the whole commute, add fruit chunks for texture variation, and make them easier to buy with a pre-paid card system.
Result: Sales increased significantly—not by making "better" milkshakes, but by doing the job better than alternatives.
The Simple JTBD Explanation
Using Jobs to Be Done can be a purely mental exercise, or you can map it out systematically.
Start with a customer action—someone bought your product, used a feature, or switched from a competitor. Ask yourself: "What progress were they trying to make in their life?"
Then dig deeper with these questions:
- What situation triggered this need?
- What were they doing before they hired your product?
- What will they be able to do now that they couldn't before?
- What does success look like from their perspective?
Alternatively, use the job statement template: "When I [situation], I want to [motivation], so I can [expected outcome]."
For example:
- When I'm commuting alone in the morning, I want something to keep me occupied and satisfied, so I can arrive at work feeling ready for the day.
- When I'm presenting to executives, I want to look prepared with real-time data, so I can maintain my credibility and influence decisions.
- When I'm onboarding a new team member, I want them to feel productive immediately, so I can avoid weeks of hand-holding.
You can apply JTBD to big decisions (choosing an enterprise platform) and small ones (signing up for your newsletter). It's universal across B2C, B2B, and even internal products.

Jobs to Be Done in Practice
Let's see what JTBD looks like in action. Consider a product manager evaluating different analytics tools.
Traditional thinking: "This PM needs analytics. Let's compare features: custom dashboards, SQL access, API integrations, pricing."
JTBD thinking: "What job is this PM hiring analytics for?"
Possibilities:
- Job 1: "When executives ask unexpected questions in meetings, I want instant access to data, so I can look competent and maintain trust."
- Job 2: "When prioritizing features, I want to see which ones drive retention, so I can confidently defend my roadmap."
- Job 3: "When my engineer asks 'is this worth building?', I want proof of user behavior, so I can get buy-in without endless debates."
Each job has different success criteria:
- Job 1 needs speed and mobile access (for in-meeting queries)
- Job 2 needs retention cohort analysis and custom segmentation
- Job 3 needs shareable reports and clear visualizations
Same person, same role, different jobs—requiring different solutions. Understanding the job reveals what matters.

The Jobs to Be Done Framework for PMs
Step 1: Identify the circumstance (the "when")
Jobs happen in specific contexts. The situation creates the need.
Don't ask: "What do product managers need?"
Ask: "When a product manager faces [specific situation], what are they trying to accomplish?"
Examples:
- When a PM is three weeks from a launch and discovers a critical bug...
- When a PM presents quarterly results to a skeptical executive team...
- When a PM inherits a product with no documentation...

Step 2: Understand the struggle (the "why now")
Something creates urgency. What pain became unbearable? What changed?
- What triggered them to look for a solution today?
- What was the "last straw" moment?
- What anxiety or frustration pushed them to act?
This reveals competing solutions they've tried and abandoned.

Step 3: Define the job (the "what")
Express the job as progress, not features.
Bad: "User needs a dashboard"
Good: "When reviewing weekly metrics, user wants to spot anomalies immediately, so they can prevent small issues from becoming big problems"
Bad: "Customer wants faster support"
Good: "When a customer hits a blocker, they want to get unstuck without waiting, so they can maintain momentum on their project"

Step 4: Map anxieties and habits (the barriers)
Two forces prevent customers from hiring your product:
Anxieties about the new solution:
- Will this actually work?
- Will I look stupid if it fails?
- Is this worth the effort to learn?
- What if I can't switch back?
Habits of the current solution:
- They've already paid for the alternative
- Their team knows the current tool
- Switching requires explaining to stakeholders
- The current solution is "good enough"
These forces must be overcome, not ignored.

Step 5: Define success (the "so I can")
What does life look like when the job is done?
- What can they do now that they couldn't before?
- What anxiety went away?
- What does this enable them to do next?
- How do they measure that the job was successful?
This is your North Star—not adoption, but progress.

Real Product Scenarios Through the JTBD Lens
Scenario 1: Why Do PMs Use Notion/Confluence/Docs?
Surface answer: "For documentation"
JTBD analysis:
- Job 1: "When I'm interrupted with the same question repeatedly, I want a single source of truth to point to, so I can stop being everyone's memory."
- Job 2: "When new PMs join, I want them to understand context without 20 meetings, so I can focus on actual work instead of onboarding."
- Job 3: "When stakeholders question my decisions, I want a paper trail of reasoning, so I can defend choices without looking defensive."
Insight: People aren't hiring docs for "documentation"—they're hiring them to reduce interruptions, scale themselves, and create accountability. A tool that serves Job 1 might fail at Job 3.

Scenario 2: Why Do Teams Switch to Your Project Management Tool?
Surface answer: "Our competitor was too expensive"
JTBD analysis reveals the real job:
- Situation: PM inherited a project with dependencies across 5 teams
- Struggle: Current tool made dependencies invisible; blockers were discovered only in status meetings
- Job: "When I'm managing cross-team work, I want to see dependency risks automatically, so I can unblock teams before they're stuck, not after."
- Success: No surprises in status meetings, teams stay unblocked
Insight: Price wasn't the job—visibility was. They would have paid more for your tool if it solved the real problem. Now you know what to emphasize in messaging and what to build next.
Scenario 3: Why Do Customers Churn?
Traditional analysis: "Low engagement, didn't use key features"
JTBD analysis:
- Original job: "When onboarding clients, I want them to see immediate value, so I can close deals faster and reduce the sales cycle."
- Why they left: The product helped close deals (job done!), but created a new job they hadn't anticipated: "When clients ask advanced questions, I need to become a product expert, so I don't lose credibility."
- Reality: They hired your product for one job, it created a different job they weren't prepared for, so they "fired" you.
Insight: Churn wasn't about your product failing—it was about creating unexpected work. Solution: Better training, or a simpler product that doesn't require expertise.

Scenario 4: Why Isn't This "Must-Have" Feature Getting Adopted?
Your thinking: "Users said they needed bulk editing. We built it. Why isn't anyone using it?"
JTBD investigation reveals:
- Job they told you: "I need to edit multiple items at once"
- Real job: "When I make a mistake, I want to fix it quickly without embarrassment, so my manager doesn't notice."
- Problem: Bulk edit requires CSV export, editing in Excel, and re-import—three steps with potential errors—making the "fix quickly" job harder, not easier.
Insight: They told you what they wanted (bulk edit), not what job they were trying to do (fix mistakes quickly). Build inline multi-select editing instead.

The JTBD Interview: Five Questions That Reveal Everything
When interviewing customers, forget feature discussions. Ask these:
Question 1: "Tell me about the first time you realized you needed something like this."
This reveals the circumstance and emotional trigger. Listen for frustration, anxiety, or a specific moment of clarity.
Question 2: "What were you using before? Why did you stop?"
This reveals competing solutions and why they failed to do the job. Your real competitors emerge here.
Question 3: "Walk me through the day you decided to try our product."
This reveals the tipping point—what made today different from yesterday? What urgency existed?
Question 4: "What were you worried about when you first started using it?"
This reveals anxieties that almost prevented them from hiring you. These anxieties still exist in your prospects.
Question 5: "How do you know it's working? What changed?"
This reveals their definition of success—often very different from yours. This is your real value proposition.

Jobs vs. Features vs. Benefits
Understanding the hierarchy helps you communicate value:
Features: What your product has
"We have real-time collaboration, version history, and 200+ integrations"
Benefits: What features enable
"You can work together seamlessly, track changes, and connect your tools"
Jobs: Why customers actually care
"When your remote team is spread across timezones, you want everyone to stay aligned without meetings, so you can ship faster without confusion"
Features describe your product. Benefits describe capabilities. Jobs describe customer progress. Marketing that leads with jobs resonates because it mirrors customers' internal dialogue.

Common JTBD Mistakes Product Managers Make
Mistake 1: Confusing Jobs with Tasks
Tasks are activities. Jobs are progress.
Task: "Send an email"
Job: "When I need a decision from my boss, I want to make my case persuasively, so I can move forward without delays"
Email is how they accomplish the job, not the job itself.

Mistake 2: Assuming One Product = One Job
Products get hired for multiple jobs by different customer segments.
Slack gets hired for:
- "Reduce email overload"
- "Make remote work feel connected"
- "Keep conversations searchable and organized"
- "Look like a modern company"
Each job requires different messaging and feature prioritization.
Mistake 3: Taking Feature Requests at Face Value
"I need a dark mode" might actually mean:
Job: "When I work late at night, I want to avoid eye strain, so I can stay productive without headaches"
Or: "When showing my screen in meetings, I want to look like I use modern tools, so I maintain credibility with my team"
One job needs dark mode. The other needs status.

Mistake 4: Ignoring Emotional Jobs
Jobs have functional and emotional dimensions.
Functional: "Calculate my taxes correctly"
Emotional: "Feel confident I won't get audited"
The emotional job often matters more than the functional one.

Mistake 5: Forgetting Social Jobs
How do customers want to be perceived?
"When I recommend this tool to my team, I want to look like I'm on top of new trends, so I can maintain my reputation as an innovator"
This explains why people choose trendy tools over better ones.

Building Your JTBD Practice
Week 1: Job Listening
Listen to customer conversations differently. When someone mentions a problem, reframe it as a job:
- "This is slow" → Job: "When I'm rushing to meet a deadline, I want tools that don't slow me down, so I can deliver on time"
- "I can't find anything" → Job: "When I'm looking for past work, I want to find it instantly, so I don't waste time recreating things"

Week 2: Interview Three Customers
Pick three recent customers. Use the five JTBD questions above. Don't pitch, just listen. Record and transcribe if possible.
Look for patterns in:
- Triggering circumstances
- Competing solutions they tried
- Anxieties they overcame
- How they define success

Week 3: Map Your Product to Jobs
List your features. For each, identify what job customers are hiring it for. You'll discover:
- Features serving the same job (consolidation opportunity)
- Jobs with no good solution (build opportunity)
- Features serving no job (removal opportunity)

Week 4: Rewrite Your Messaging
Take your homepage or sales deck.
Rewrite one section using job language:
Before: "Powerful analytics and customizable dashboards"
After: "When executives ask unexpected questions, get answers instantly without scrambling through spreadsheets"
Test it with customers. Which version resonates more?

JTBD for Different Product Decisions
- For feature prioritization: "Which job is most painful and underserved right now?"
- For competitive positioning: "What job do we do better than alternatives, and for whom?"
- For pricing: "How much is solving this job worth to customers in this circumstance?"
- For onboarding: "What's the minimum progress needed for customers to believe we can do the job?"
- For messaging: "What circumstance and job should our homepage lead with?"
- For customer segmentation: "Group customers by job, not by demographics or company size"

Jobs to Be Done vs. Other Frameworks
JTBD vs. Personas:
- Personas: "Who is the customer?" (Demographics, behaviors)
- JTBD: "What progress does the customer want to make?" (Motivation, context)
- Together: Personas show you who; JTBD shows you why

JTBD vs. User Stories:
- User Stories: "As a [role], I want [feature], so that [benefit]"
- JTBD: "When I'm in [situation], I want to [make progress], so I can [outcome]"
- Difference: JTBD focuses on circumstance, not role. Multiple roles can have the same job.

JTBD vs. Value Proposition:
- Value Prop: What you offer
- JTBD: What customers are trying to accomplish
- Connection: JTBD informs your value prop by revealing what customers actually value

The Bottom Line
Features answer "what does it do?" Benefits answer "what can I do with it?" Jobs answer "why do I need it?"
Most product teams stop at features. Good teams reach benefits. Great teams understand jobs.
When you understand the job, everything clarifies:
- What to build next (features that do the job better)
- Who to target (people in the circumstance where the job arises)
- How to message (speak to the progress they want to make)
- How to price (based on job value, not feature count)
- Why people churn (the job changed, or your product stopped doing it)
Start with one customer conversation this week. Ask not what they want, but what they're trying to accomplish. You'll be surprised how different the answers are—and how much clearer your product strategy becomes.
Because customers don't want your product. They want to make progress. Your job is to help them do it better than any alternative.

Quick Reference Card
The Core Question: "What job is the customer hiring this product to do?"
Job Statement Template: "When I [situation], I want to [motivation], so I can [expected outcome]"
Five JTBD Interview Questions:
- When did you first realize you needed something like this?
- What were you using before? Why did you stop?
- Walk me through the day you decided to try our product
- What worried you when you first started?
- How do you know it's working? What changed?
Jobs Have Three Dimensions:
- Functional (get the task done)
- Emotional (feel a certain way)
- Social (how others perceive me)
Remember:
- Features = What it does
- Benefits = What I can do
- Jobs = Why I need it
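The job statement template lends itself to a simple data structure, which some teams use to keep a shared backlog of jobs. A minimal sketch of one way to structure it; the class, field names, and rendering are hypothetical, and the example job is borrowed from earlier in this section:

```python
from dataclasses import dataclass

@dataclass
class JobStatement:
    """One customer job in the 'When I... I want to... so I can...' form."""
    situation: str   # the "when": circumstance that triggers the need
    motivation: str  # the "what": progress the customer wants to make
    outcome: str     # the "so I can": how the customer defines success

    def __str__(self) -> str:
        return (f"When I {self.situation}, I want to {self.motivation}, "
                f"so I can {self.outcome}.")

job = JobStatement(
    situation="am presenting to executives",
    motivation="look prepared with real-time data",
    outcome="maintain my credibility and influence decisions",
)
print(job)
```

Storing jobs as data rather than prose makes it easier to tag them by circumstance, link features to the jobs they serve, and spot jobs that no feature addresses.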
First Principles Thinking
Why Product Managers Need First Principles Thinking
Your competitor just launched a feature. Your CEO wants it by next quarter. Your engineers start architecting. But nobody asks the real question: "Should we even build this?"
First principles thinking is how Elon Musk designed reusable rockets, how Netflix pivoted from DVDs to streaming, and how the best product managers avoid building "me-too" features that waste months of development.
Instead of copying what exists or accepting "that's how it's done," first principles thinking strips problems down to fundamental truths and rebuilds solutions from the ground up. It's the difference between innovation and imitation.
For product managers, this framework transforms how you approach feature requests, competitive threats, technical constraints, and customer problems. It's your superpower for finding breakthrough solutions hiding in plain sight.
What Is First Principles Thinking?
First principles thinking is reasoning from foundational truths rather than by analogy or convention.
- Traditional thinking: "Our competitor has a dashboard, so we need a dashboard."
- First principles thinking: "What problem are customers trying to solve? What's the most effective way to solve it? Is a dashboard actually the answer, or is it just familiar?"
The process has three steps:
Step 1: Identify and challenge assumptions
List everything you believe to be true about the problem. Then ruthlessly question each assumption.
Step 2: Break down to fundamental truths
Strip away assumptions until you reach facts that are undeniably true—the foundational reality.
Step 3: Rebuild from the ground up
Use only those fundamental truths to construct new solutions, free from conventional constraints.
The Product Manager's First Principles Framework
Question 1: What problem are we actually solving?
Most feature requests come disguised as solutions. "We need a mobile app" isn't a problem—it's a solution.
The problem might be "customers can't access our service on the go." Dig deeper with five whys:
- Why do customers want mobile access?
- Why can't they use our web version on mobile?
- Why is the web experience inadequate?
- Why haven't we optimized for mobile browsers?
- Why did we deprioritize responsive design?
You might discover the real problem isn't platform—it's load time. A progressive web app might solve this faster than a native app.
Question 2: What are we assuming to be true?
Common PM assumptions to challenge:
- "Users want more features" (Maybe they want fewer, better ones)
- "We need to match competitor features" (Maybe your differentiation is doing less)
- "This requires custom development" (Maybe there's an API or integration)
- "Users won't change their behavior" (Maybe with the right incentive, they will)
- "We can't charge for this" (Maybe it's your most valuable feature)
Question 3: What's fundamentally true?
Look for physics-level truths about your problem:
- What physical or mathematical constraints exist?
- What human psychological needs are at play?
- What economic realities govern the situation?
- What technical capabilities definitively exist or don't exist?
These become your building blocks.
Question 4: What would we build if starting fresh today?
Ignore your existing codebase, current architecture, and legacy decisions. If you were solving this problem for the first time with today's technology and knowledge, what would you create?
Often, the gap between this answer and your current solution reveals technical debt masquerading as "best practice."
Real Product Problems Solved with First Principles
Case 1: The "We Need a Chatbot" Request
Traditional approach: Build a chatbot because everyone has one.
First principles breakdown:
- Assumption: Customers want to chat with us
- Truth: Customers want answers quickly without friction
- Challenge: What if they don't want to chat at all?
- Discovery: Most questions are repetitive. 80% ask the same 10 things.
Solution: A smart FAQ with predictive search and one-click actions. No chatbot needed. Built in 2 weeks instead of 3 months.
Case 2: Competing with a Cheaper Competitor
Traditional approach: Cut prices or add more features to justify the cost.
First principles breakdown:
- Assumption: We're competing on price/features
- Truth: Customers are trying to achieve a business outcome
- Discovery: The cheaper tool requires 20 hours of setup. Ours takes 2 hours.
- Reframe: We're not more expensive—we save 18 hours of labor worth $900. We're actually cheaper.
Solution: Change positioning, not pricing. Add a time-to-value calculator. Convert 30% more enterprise leads.
Case 3: The Slow Dashboard Problem
Traditional approach: Optimize database queries, add caching, upgrade servers.
First principles breakdown:
- Assumption: Dashboards must load all data in real time
- Truth: Users make decisions based on trends, not live data
- Challenge: What if the data being "slow" isn't the problem?
- Discovery: 90% of views are for weekly/monthly trends. Only 10% need real-time data.
Solution: Pre-compute aggregates daily. Keep real-time optional. The dashboard loads in 0.8s instead of 12s, and server costs drop 40%.
Case 4: Feature Parity with Competition
Traditional approach: Build everything competitors have to "stay competitive."
First principles breakdown:
- Assumption: Customers switch because of missing features
- Truth: Customers switch because they're not achieving their goal
- Discovery: Customers list 12 features they want. They actually use 3 regularly.
- Insight: Competitors have feature bloat. Complexity is their weakness.
Solution: Build 3 features exceptionally well. Market as "focused simplicity." Win customers tired of complexity.
Four-Step Exercise for Your Next Feature Request
Next time a stakeholder requests a feature, run this mental model:
Step 1: Translate the request (2 minutes)
Request: "We need Slack integration"
Translation: "What outcome do they want that Slack integration would provide?"
Discovery: "Get notified when high-value leads take key actions"
Step 2: List your assumptions (3 minutes)
- We need to integrate with Slack specifically
- Users want notifications in a chat tool
- Real-time notifications are necessary
- This requires engineering work
- Slack is where users spend their time
Step 3: Challenge each assumption (5 minutes)
- Why Slack and not email, SMS, or webhooks?
- Do they want Slack, or just timely notifications?
- Would a daily digest work instead of real-time?
- Could we use Zapier instead of a custom integration?
- Are users actually in Slack all day?
Step 4: Identify core truths and rebuild (10 minutes)
- Truth: The sales team needs to respond to high-intent actions quickly
- Truth: Different users have different notification preferences
- Truth: Over-notification creates alert fatigue
Rebuild: Create a smart notification system with user-configurable channels (Slack, email, SMS, webhook) and intelligent batching. Slack is one option, not the solution.
Result: You just built a more flexible, valuable feature than the original request.
When NOT to Use First Principles Thinking
First principles thinking is powerful but expensive. It requires deep thought and can slow decision-making.
Don't use it for:
- Minor iterations: A/B testing button colors doesn't need philosophical deconstruction.
- Time-critical decisions: Production is down? Follow the incident playbook, not Socratic questioning.
- Well-understood problems: User authentication? Use established security practices, not reinvented cryptography.
- Low-impact features: Small quality-of-life improvements don't need existential analysis.
Use first principles for:
- Major strategic decisions
- Competitive threats
- Resource-intensive features
- Technical architecture choices
- Market positioning
- When you're stuck or making no progress
Common Traps and How to Avoid Them
Trap 1: Analysis Paralysis
First principles thinking can spiral into endless questioning, and you never build anything.
Fix: Set a time box.
Spend 30-60 minutes on first principles analysis, then make a decision with what you've learned. Perfect understanding isn't the goal—better understanding is.
Trap 2: Reinventing the Wheel
Questioning everything doesn't mean building everything from scratch. Sometimes the existing solution is optimal.
Fix: After your analysis, explicitly decide: "Should we use the conventional approach or build something new?" Both are valid outcomes. First principles thinking can confirm that the standard solution is actually best.
Trap 3: Dismissing Domain Expertise
"That's how we've always done it" deserves scrutiny, but domain experts often know why conventions exist.
Fix: Include experienced team members in the analysis. Ask them, "Why is this the standard approach?" Their answers either validate the convention or reveal outdated assumptions.
Trap 4: Forgetting Practical Constraints
Pure first principles thinking ignores real-world limits like budgets, timelines, and team capabilities.
Fix: After rebuilding from fundamentals, layer back practical constraints: "Here's the ideal solution. Here's what we can actually build in Q3 with our current team."
Building Your First Principles Muscle
Week 1: Observe and Document
When someone proposes a solution, write down the hidden assumptions. Don't challenge yet—just practice identifying them.
Week 2: Question One Thing
Pick one upcoming decision. Apply the four-step exercise above. Compare the outcome to your original approach.
Week 3: Share Your Thinking
In your next product review, walk stakeholders through your first principles analysis. Show how you moved from request to root problem to solution. This builds buy-in and teaches others the framework.
Week 4: Make It Routine
Add a first principles checkpoint to your product decision template:
- What assumptions are we making?
- What's fundamentally true?
- What would we build if starting fresh?
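If your decision template lives anywhere scriptable (a ticket bot, a doc generator), the Week 4 checkpoint can be encoded as a small record that refuses to pass until all three questions are answered. A minimal sketch under that assumption; the class and field names here are hypothetical, not a standard:

```python
# Sketch: the first-principles checkpoint as a structured record.
# Names are illustrative; adapt to whatever your template tooling uses.
from dataclasses import dataclass, field


@dataclass
class FirstPrinciplesCheckpoint:
    decision: str
    assumptions: list[str] = field(default_factory=list)         # What are we taking for granted?
    fundamental_truths: list[str] = field(default_factory=list)  # What's undeniably true?
    fresh_start_solution: str = ""                               # What would we build starting fresh?

    def is_complete(self) -> bool:
        """Pass only when all three checkpoint questions have answers."""
        return bool(self.assumptions and self.fundamental_truths and self.fresh_start_solution)


cp = FirstPrinciplesCheckpoint(decision="Build Slack integration")
cp.assumptions.append("Users want notifications in a chat tool")
cp.fundamental_truths.append("Sales must respond to high-intent actions quickly")
cp.fresh_start_solution = "Configurable notification channels with intelligent batching"
print(cp.is_complete())  # True once all three fields are filled
```

The point isn't the code; it's that a decision can't be marked "reviewed" while any of the three questions is still blank.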
First Principles Questions to Keep Handy
Copy these to your notes for quick reference:
About the problem:
- What problem are we solving? (Not: what are we building?)
- Who specifically has this problem?
- What's the cost of not solving it?
- How do they solve it today?
About assumptions:
- What are we assuming must be true?
- Which assumptions can we test quickly?
- What if the opposite were true?
- What would need to change for this assumption to be false?
About constraints:
- Which constraints are real, and which are inherited?
- What becomes possible if we remove this constraint?
- What technology exists today that didn't exist when we made this decision?
About solutions:
- What's the simplest version that solves the core problem?
- What would this look like if we started today?
- Are we solving the problem, or copying a solution?
The Bottom Line
First principles thinking isn't about being contrarian or rejecting all conventions. It's about understanding why things are the way they are, and having the clarity to change them when they shouldn't be.
Most product managers build incrementally on what exists. The best ones occasionally step back and ask whether the foundation itself still makes sense.
You won't use first principles thinking every day. But when you face a strategic decision, competitive pressure, or are simply stuck in "that's how we do it" thinking, it's your escape hatch to breakthrough solutions.
Start with one decision this week. Question the assumptions. Find the fundamental truths. Rebuild from there. You might just discover that the "obvious" solution was solving the wrong problem all along.
Quick Reference Card
When to use: Strategic decisions, competitive threats, resource-intensive features, when stuck
The three steps:
- Challenge assumptions (What are we taking for granted?)
- Find fundamental truths (What's undeniably true?)
- Rebuild from scratch (What would we create starting fresh?)
Key questions:
- What problem are we actually solving?
- What are we assuming?
- What would we build if starting today?
- What's the simplest version that works?