Why You're (Probably) an Unreliable Judge of Your Own Work
How cognitive biases make self-assessment untrustworthy, and why iteration with external feedback is key.
Ever finished a project—a piece of writing, a design, even just an email—and felt a nagging uncertainty about its quality? You pour hours in, refine details, and yet... doubt lingers. It turns out this isn't just imposter syndrome. There's a well-documented cognitive reason: our internal judgment about our own work is often surprisingly unreliable.
This post explores why this happens, how it connects to established cognitive biases backed by research, and why embracing iteration and external feedback is crucial for growth, impact, and actually getting things done.
The Self-Assessment Paradox
At the root of this is a well-documented cognitive phenomenon: people often struggle with accurate self-assessment. Research, notably by Dunning and Kruger, shows that our ability to evaluate our own competence is often flawed. This is also why we can be better at judging others' work than our own: an external perspective bypasses many of our internal biases.
Whether you're writing code, designing interfaces, crafting marketing copy, or evaluating the output of an AI tool, you're likely to misjudge your work's value and effectiveness. While sometimes this manifests as overconfidence (especially among novices, a key finding of the Dunning-Kruger effect), often, particularly for those with more experience, it results in underestimating the value or competence of one's own work (or the utility of a tool's output).
This self-critical bias isn't just random; it stems from several cognitive factors:
Familiarity Blindness & Hindsight Bias: We know every messy step, every discarded draft, every shortcut taken during creation. This intimate knowledge makes the final product seem less novel or impressive to us than to an outsider who only sees the polished result. This is amplified by Hindsight Bias – the tendency to see past events as having been predictable. Because we know the outcome (the finished piece), the process can feel less innovative in retrospect.
Expertise Drift & The Curse of Knowledge: As our skills grow, so do our standards. We start judging our work by advanced criteria that newcomers or the target audience might not share or even perceive. This is the "Curse of Knowledge" – a cognitive bias where experts find it difficult to imagine not knowing something they know, making it hard to adopt the perspective of a novice. Consequently, experts often underestimate how clear or valuable their "basic" explanations or simpler creations might be to others.
Preference Mismatch: We naturally optimize for our own tastes and internal definitions of "good." But our preferences aren't universal. What we think is brilliant, elegant, or necessary might not align with what our audience actually needs, understands, or finds effective.
Real-World Examples Illustrating These Biases:
A seasoned writer worries their article is "too simplistic," but beginner readers find it perfectly clear and exactly the introduction they needed (classic Curse of Knowledge).
A designer scraps a clean, "simple" user interface for something more feature-rich, but user testing later reveals the simple version led to higher task completion rates (Preference Mismatch clashing with objective user needs).
An expert programmer dismisses an AI-generated code snippet as "too basic," yet it's perfectly functional and understandable for the junior developer it was intended to help (another manifestation of the Curse of Knowledge when evaluating external outputs based on one's own advanced standards).
A developer hesitates to share a functional internal tool because the underlying code feels "hacky," only to find colleagues praise it for solving their core problem efficiently, regardless of internal elegance (Familiarity Blindness obscuring external utility).
“The work we’re most critical of (our own or even a tool's output evaluated through our biased lens) is often the most valuable to others precisely because our self-assessment is unreliable.”
The Implication: Trust External Judgment More Than Your Gut
So, what's the practical takeaway from our tendency towards unreliable self-assessment?
Optimize for External Signals: Don't rely solely on your internal "feel" or sophisticated personal standards – as we've seen, they can be unreliable guides. Whenever possible, optimize against clear, external signals (user feedback, peer reviews, objective results from A/B tests) or performance metrics; a minimal code sketch of this idea appears just after this list.
Build In Outside Feedback Loops: Incorporate external perspectives actively. Seek out peer reviews, conduct user testing, find a mentor, listen to customer feedback. Getting quality feedback is a cornerstone of deliberate practice and skill acquisition. External validation is fundamental for improvement, highlighting blind spots internal reflection might miss.
Reframe "Just Okay": Recognize that the draft you dismiss as "mediocre" might actually be the ideal solution for the intended audience or purpose. Your expert perspective or familiarity might be preventing you from seeing its true value from an outside view.
The same cognitive quirks that often make critiquing easier than creating also suggest that external critique is frequently more reliable for assessment than purely internal judgment.
We often improve faster and more effectively when our performance and creations are reflected back through an external mirror.
When In Doubt: Publish (Or At Least Share)
This leads to one clear, actionable principle, which is especially powerful because experienced creators are the most likely to underestimate the value of their own work:
When in doubt, publish. Or at least share.
You are likely prone to underestimating how useful or clear your work is, thanks to biases like the Curse of Knowledge or simply being too close to the project.
What feels "obvious" to you due to your expertise might be a crucial insight for someone else learning the ropes.
What seems "unfinished" by your evolving internal standards might already be more valuable or informative than currently available alternatives for your audience.
What you judge as "not good enough" based on your internal critique could be the perfect entry point or explanation for a beginner.
Sharing allows others—peers, users, mentors—to provide that essential external assessment. But it's not just about getting a one-time verdict. True improvement comes from iteration: repeatedly doing the work, sharing it, gathering external feedback, and then using that feedback to inform the next iteration. This cycle is fundamental. External feedback acts as a crucial guidepost, helping to correct the course set by our often-unreliable internal compass.
Crucially, this doesn't mean blindly accepting all feedback. External input should be filtered through your own goals and understanding. Consider the source, look for patterns, and weigh it against other data points. However, without consistently putting your work out there to get that feedback, your iteration cycle stalls, trapped by the echo chamber of your own potentially flawed self-judgment. This internal hesitation, often driven by perfectionism, is a major barrier to completion. As Jon Acuff argues in his book "Finish," perfectionism is the enemy of done. Letting go of the need for internal perfection and instead embracing the cycle of sharing and iterating is often the only way to truly finish meaningful work.
This principle applies across domains:
Blog posts, code repositories, design prototypes, internal documents, creative projects, even useful AI prompts or configurations you've developed.
If critique is easier and often more objective when coming from the outside, others can help you improve—but only if they can see the thing, repeatedly, as it evolves.
Private perfectionism, fueled by unreliable self-assessment, bottlenecks progress. Public (or shared) iteration, guided by filtered external feedback, unlocks it.
So, the next time you're hesitating, remember: your judgment is likely unreliable in predictable ways. Fight the pull of perfectionism. Let the world (or at least a trusted colleague) offer their perspective. Engage in the cycle. You might be surprised by the value they find and how much faster you improve and, ultimately, finish.
References & Further Reading
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments.
Fischhoff, B. (1975). Hindsight ≠ foresight: The effect of outcome knowledge on judgment under uncertainty.
Birch, S. A. J., & Bloom, P. (2007). The Curse of Knowledge in Reasoning About False Beliefs. (Note: While Camerer et al. (1989) formalized it in economics, Birch & Bloom provide accessible psychological experiments).
Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. (Highlights the importance of feedback in skill development).
Acuff, Jon. (2017). Finish: Give Yourself the Gift of Done. (Argues against perfectionism and advocates strategies for completing goals).