Your First Performance Review as a Manager

It's review season. You open the performance management tool and stare at the blank form for the person who's been on your team for six months.

And you realize you haven't been keeping notes.

You remember some things: the product launch where they really came through, one meeting where they seemed disengaged, the project earlier in the quarter where the deliverable was late. But you're not sure of the dates. You're not sure whether the late project was their fault or a dependency. And you have a general sense that they're a solid performer, but "solid" is not something you can put in a box that says "evidence."

This is the moment that teaches new managers the most important lesson about performance reviews: the review is the outcome of a year-long process. If you haven't been doing the process, the review will be hollow.

But you're here now. So let's talk about how to write the best review you can with what you have, and what to do differently going forward.

Key Facts About Performance Reviews

  • Gallup research shows only 14% of employees strongly agree their performance reviews inspire them to improve — meaning roughly 86% walk away unmoved, confused, or demotivated.
  • SHRM surveys find that 95% of managers are dissatisfied with their organization's performance review process, and nearly 90% of HR leaders say reviews don't yield accurate information about employee performance.
  • A CEB/Gartner study of 9,000+ managers and employees found that two-thirds of performance ratings are driven by the rater's own idiosyncratic tendencies rather than the ratee's actual performance — recency bias being the largest single distortion.
  • Gallup reports that employees whose managers give regular, documented feedback are 3.6x more likely to be engaged, and engaged employees are 87% less likely to leave their organization.
  • Harvard Business Review found that roughly 60% of review feedback is delivered verbally with no written record, which correlates with 2x higher rates of post-review disagreement and retention risk.

The Review Is a Summary, Not a Conversation

This is the principle that changes everything about how reviews work well versus badly.

A performance review should never be the first time someone hears hard feedback. Everything in the written review should echo a conversation you've already had. If there's a major development area in the review that the person didn't know was coming, you've already failed them, not in the review, but in the six months before it. This is exactly why giving feedback without defensiveness is a skill worth building well before review season arrives.

The review's job is to synthesize and record. The conversation's job is to align and plan forward.

When this is working, your direct report reads the review before the conversation, and their reaction is "yes, this matches what I expected." Not surprised by the rating. Not blindsided by a criticism. Not reading feedback that they've never heard in any form before.

If they are surprised, the conversation will be about managing the surprise rather than about what comes next. And that's a waste of the only time you have set aside for an honest, forward-looking conversation about their career.

Before You Write: Collect Evidence

The first performance review mistake is writing from memory rather than evidence. Memory is unreliable and biased toward recent events, the "recency bias" that makes the last two weeks feel like the whole year. Research on performance review accuracy published in Harvard Business Review found that recency bias inflates or deflates ratings by an average of one full performance tier, meaning the entire year's work can effectively be invisible if the last six weeks were unusually strong or weak.

To fight this, start a running evidence log. Not a diary, just a simple document where you record:

  • Specific examples of strong work (with dates)
  • Specific examples of misses or development areas (with dates)
  • Feedback you've given formally or informally
  • Goals progress (what was set, what was achieved, what slipped)
  • Things said by others about this person's work (cross-functional feedback, comments in team meetings, etc.)

You should be adding to this document throughout the year, well before a performance cycle even starts. But if you're reading this in review season and you don't have it, start now.

Go through your calendar for the last six months and reconstruct the major events: projects launched, deadlines hit or missed, meetings where this person's contribution was notable. Look at your 1:1 notes. Look at project documentation. Read the feedback you gave in past conversations.

You'll find more than you expected. It's not organized, but it's a starting point.

The Evidence-Based Review Prep Framework

The Evidence-Based Review Prep is a three-step discipline new managers can apply before every review cycle to neutralize bias and produce reviews that hold up under scrutiny.

  1. Evidence collection: pull specific, dated examples from 1:1 notes, project docs, and cross-functional feedback spanning the full review period, not just the last six weeks.
  2. Bias check: audit your draft against recency, halo, and leniency biases by asking whether you'd rate this person the same if the last month had gone differently.
  3. Outcome framing: translate every observation into the "[Person] did [behavior] in [context], which [effect]" structure so the review reads as a record of work, not a verdict on the person.

Use Goals as the Backbone

The most defensible performance reviews are built around the goals you set together at the start of the period. Not your general impression of the person, not their attitude or personality, but: did they achieve what we agreed they would try to achieve?

For each goal:

  • What was the goal?
  • What happened?
  • What was in their control versus outside it?
  • What did they learn or demonstrate through this goal?

If your goals were well-written (specific, measurable, time-bound), this section almost writes itself. If your goals were vague, this is where you pay for it. "Improve communication" doesn't give you much to write against. "Deliver a monthly stakeholder update to the leadership team with at least three data points per update" gives you a lot to work with.

This is the argument for better goal-setting: not just that it helps people focus, but that it makes performance conversations far easier and fairer. Read Setting Goals for a Reluctant Team for how to build this habit.

Write in Behaviors and Outcomes, Not Traits

"She's reliable." Not useful.

"She delivered the Q2 product brief three days ahead of the deadline, which gave the design team enough runway to iterate twice before the launch." Useful.

"He struggles with communication." Not useful.

"In three cross-functional project kickoffs, he didn't share the team's requirements until after the agenda had been set, which created rework in two of those projects." Useful.

The difference is specificity. Trait-based language ("reliable," "communicates poorly," "strong performer") is almost impossible to act on because it doesn't tell the person what to do differently. Behavior-based language gives them a picture of what you observed and what effect it had.

A test: could you tell this story to someone who wasn't in the room? If not, get more specific.

The template to use:

[Person] did [specific thing] in [specific context], which [had this specific effect].

That structure works for both positive and developmental feedback. It keeps you honest and it keeps the review useful.

Never Surprise Someone With Their Rating

If someone doesn't know, broadly, what their rating is going to be before they sit down in the review conversation, something went wrong.

This doesn't mean you have to tell them the exact number in advance. It means that if you're going to give someone a "does not meet expectations" or a very high rating, they should have some signal before the meeting. That signal can be a conversation where you said "I want you to know that this has been a strong year for you" or "I want to have an honest conversation about where things stand before the formal review."

Surprises in performance reviews land as betrayal. Even if the feedback is accurate, the fact that they didn't see it coming makes them question whether you've been honest with them all year. And it makes the conversation about the surprise, not the path forward.

The preparation conversation doesn't have to be long:

"Before the formal review next week, I wanted to give you a sense of where I'm landing. I think it's been a solid year overall, and there are a couple of specific development areas I'll be noting. Nothing that should be a surprise if you think about our 1:1 conversations, but I wanted you to have context before reading the written document."

Read Giving Feedback Without Creating Defensiveness for the broader principles of how feedback conversations should be set up to avoid defensive reactions.

The Performance Evidence Log Template

Build this before your next review cycle. Even a simple Google Doc works.


[Person's Name]: Evidence Log

Quarter/Period:

Goals set:

  1. [Goal]: [Progress notes]
  2. [Goal]: [Progress notes]

Notable contributions:

  • [Date] [Specific thing they did and its impact]
  • [Date] [Specific thing they did and its impact]

Development areas:

  • [Date] [Specific observation and context]
  • [Date] [Feedback given and response]

Cross-functional feedback:

  • [Date] [Who said what about this person's work]

Feedback I've given:

  • [Date] [Topic of feedback, how it went]

Update this every few weeks, not just before review season. The discipline of writing it down close to the event means your evidence is accurate and your biases don't distort it over time.

How Rework Makes Review Prep Trivial

Most new managers don't dread performance reviews because writing them is hard. They dread them because six months of context is scattered across Slack threads, Google Docs, sticky notes, and half-remembered 1:1s. Rework Work Ops solves this by turning continuous documentation into a byproduct of normal work rather than a separate discipline. Every 1:1 agenda, feedback note, goal check-in, and cross-functional shout-out lives on the person's timeline — tagged by date, searchable by theme, and filterable by review period.

When review season arrives, you don't reconstruct the year from memory; you open a pre-built evidence view showing goals-versus-actuals, feedback already delivered, and notable contributions with the exact dates and project links attached. The result: reviews grounded in evidence rather than recency bias, and conversations that feel like a continuation of the year instead of a surprise at the end of it.

Work Ops starts at $6/user/month — see Rework pricing — and the evidence trail pays for itself in one review cycle of saved reconstruction time. Teams that adopt continuous documentation typically reduce review-writing time by 60–70% while producing reviews their reports find fairer and less surprising.

The Review Writing Framework

When you sit down to write, use this structure:

Section 1: Goals review
For each goal: what was it, what happened, what were the factors, what does it tell you about the person's performance? Be specific. Be fair about what was in their control.

Section 2: Key contributions and impact
Three to four specific examples of notable work this period. What did they do, what was the context, what was the impact? These should include their best moments, not just a middle-of-the-road average.

Section 3: Development areas
One to three specific things to work on in the next period. Tied to observed behaviors, not personality. Connected to either their career goals or team needs. Each one should have already been discussed in a 1:1 or feedback conversation.

Section 4: Forward look
What are the key focus areas for the next period? What skills or experiences would help them grow? What are you committing to support?

Calibrating Ratings Honestly

The most common rating failure is compression toward the middle. New managers give everyone "meets expectations" to avoid conflict or because they don't feel confident differentiating. This helps no one. Gallup's research on performance management shows that employees who receive vague or undifferentiated ratings are 43% more likely to report feeling their effort is unrecognized, a significant driver of voluntary turnover among high performers.

Strong performers need to know their work is recognized as strong. Otherwise they start questioning whether their effort is being seen, and they look elsewhere. People with real development needs need honest ratings to understand that improvement is genuinely required, not optional. Ongoing career conversations throughout the year make it easier to calibrate ratings: when you know what someone is working toward, you have more context for judging whether their performance this year moved them closer or further from it.

Before you submit ratings, ask yourself:

  • Does this rating match the evidence I've collected, or my comfort level?
  • If the rated person saw my evidence and the rating, would they believe it's fair?
  • Am I rating this person based on the whole year, or mainly the last two months?

If you're uncertain about a rating, ask your own manager for input before submitting. Most managers would rather give feedback on a draft rating than find out after calibration that something is out of alignment.

The Rating Conversation Script

When you deliver the review in conversation, consider this opening:

"I want to start by asking: how would you assess your own year? What felt like a strong moment, and what would you do differently?"

This invites them to engage as a participant, not a recipient. Their self-assessment gives you data on their self-awareness, and it makes the conversation collaborative rather than one-directional.

After they share:

"Here's where I landed. [Summary of rating]. The main reasons are [two or three specific things]. I want to walk through the written document together. Ask me questions on anything. I want to make sure this is clear and fair, not just delivered."

Giving them the document to read while you're together is more honest than handing it over beforehand and leaving them to read alone. You can see their reaction and respond in real time.

For situations where someone is getting a hard rating, read Dealing With Underperformance Without Firing. The principles for delivering hard feedback without triggering defensiveness apply here too.

Separate Compensation Conversations If You Can

When you tell someone their rating in the same breath as telling them their raise, the compensation number dominates everything. If it's less than they hoped, they stop listening to the developmental feedback entirely. All of the useful forward-looking conversation gets lost.

Where your organization allows it, separate these: review conversation first, compensation conversation later. Give people time to absorb the feedback before the money conversation. SHRM's guidelines on performance review best practices recommend a minimum 48-hour gap between development feedback and compensation discussion, specifically to prevent pay anxiety from crowding out the learning conversation.

If your organization requires them together, at least establish the developmental conversation before revealing the number. Ask your opening questions, cover the major themes, then move to compensation.

What You're Building Toward

The performance review is backward-looking. But the best use of it is to set up the forward-looking conversation: what the person is building toward next year, what support they need, and what your commitments are as their manager.

When you do that consistently, performance reviews stop being something people dread. They start being an honest checkpoint in an ongoing conversation about the work and the person's career. And Career Conversations That Don't Feel Scripted explores how to keep that forward-looking dialogue going year-round, not just at review time.

Frequently Asked Questions About Giving Your First Performance Review

How long should my first performance review be?

Plan for 45–60 minutes for the conversation itself, with a 15-minute buffer in case it runs long. The written review should be roughly one page per major theme — typically 1.5 to 3 pages total. Longer than that and you're padding; shorter than that and you're probably skipping evidence. The depth of preparation matters more than the length of the document.

Should I give the rating first or the feedback first?

Lead with the self-assessment question, then give a brief summary of where you landed, then walk through the evidence. If you hide the rating until the end, they stop listening to the feedback while guessing at the number. If you drop the rating cold at the start with no context, they react to the label instead of the work. The flow that works: "How would you assess your year?" → your one-sentence summary of the rating and why → walk through specifics together.

What if I disagree with the rating my org expects me to give?

Raise it with your own manager before calibration, not after. Bring your evidence log and make the case in writing. Once calibration is complete, you own delivering the rating even if it wasn't your first choice — but you can and should be honest with your report that you advocated for a different outcome and explain the factors that shaped the final call. Never throw your organization or your manager under the bus in the conversation, but also don't pretend a rating was your preferred outcome when it wasn't.

How do I handle someone who disagrees with my feedback?

Listen fully before responding. Ask clarifying questions: "What did you see differently?" or "What context am I missing?" Sometimes you'll learn something that changes your view — that's fine, adjust. Sometimes you'll hear them out and still hold your position — that's also fine, but name it: "I hear you, and I understand why you see it that way. I still think [X] because [specific evidence]. This is where I'm landing." Agreement isn't the goal; a clear, respectful exchange is.

Should I include compensation talk in the review?

Separate them if your organization allows it, with at least 48 hours between the development conversation and the compensation conversation. When rating and pay land in the same meeting, the pay number dominates and all the forward-looking feedback gets lost. If your org requires them together, cover development first, then transition explicitly: "Now I want to talk about compensation, which is a separate conversation." The switch signals a mode change.

How do I document the conversation after?

Within 24 hours, send a short written follow-up summarizing: the key themes discussed, any commitments you made, any commitments they made, and next steps. Keep it to 5–8 bullets. This creates a shared record, protects both of you from memory drift, and sets up the next 1:1 agenda. Save the note where future-you can find it when writing next year's review — ideally in the same system where you're keeping ongoing evidence.

What should I do if I've given no formal feedback all year and now have to write a review?

Be honest with yourself about what you actually observed versus what you're inferring. Stick to behaviors and outcomes you can point to. Skip criticism that the person hasn't heard before — save it for the forward-look section framed as a development area starting now, not as a judgment of the year just finished. Then commit out loud to changing how you'll give feedback next period, and follow through with monthly check-ins.

Can I use AI to help draft the review?

AI is useful for reorganizing evidence you've already collected into clearer prose, checking tone, or suggesting phrasing. It's not useful for generating content you don't have evidence for — that's how you end up with plausible-sounding but hollow reviews your report will see through. Rule of thumb: AI can restructure, AI cannot invent. If you're asking AI to write about contributions you don't remember, you haven't done the prep work.

Learn More