For fifteen years I have watched capable people build technology to address urgent problems: healthcare access, educational equity, economic mobility, disability inclusion, community resilience. These founders are talented. They understand their markets. They have traction, testimonials, and genuine human impact. Many also burn out, break down, and exit at rates that deserve attention, taking their companies with them.
The sector often points to capital markets that weren't designed for hybrid business models, an ecosystem that leaves impact founders stranded between venture capital and nonprofit infrastructure, measurement frameworks that fail to capture social value, accelerators that apply the wrong playbooks, and investors who don't understand the territory.
Those factors matter. The cause I want to focus on is something the sector has rarely named. Until it does, much of what gets built may not address the problem.
The Wrong Variable
After fifteen years of building in this space, studying founders who thrive and those who don't, and examining my own patterns, I have come to believe the following.
A major reason impact tech founders fail at such distinctive and predictable rates is cognitive. It lives inside the founder.
Impact founders drawn to this work by something beyond market opportunity often operate from what I call an empathic default mindset. It is a cognitive orientation that prioritizes emotional attunement, human context, and moral alignment when making decisions. It is the wiring that makes them feel obligation. It makes them want to understand the person on the other side of a transaction before they think about the transaction itself. It makes them physically uncomfortable with decisions that optimize efficiency at the expense of human complexity.
This wiring is what makes many of them suited to the work. Building technology that genuinely serves people who have been left behind by prior waves of innovation often requires feeling why that matters. The empathic default is part of that.
The problem is that empathy, without structure, guardrails, or explicit recognition of how it operates, can become a source of business-destroying decisions. Because the sector has rarely named it as such, empathy-wired founders keep making the same decisions in the same patterns, blame themselves, burn out, and eventually leave. Many never recognize what is happening, because no one has given them the language to see it.
That is the misdiagnosis. The sector often treats a cognitive phenomenon as a structural one. The solutions it keeps building, such as better funding vehicles, accelerator programs, and measurement frameworks, can end up solving for the wrong variable.
What Empathy Does When It Goes Unnamed
I built my first impact-tech company in Detroit in the early 2010s. Birdhouse Health was a pediatric care coordination platform, software that helped parents navigate their children's medical needs across specialists, appointments, insurance, and a system that was not designed for them. I understood the problem with my whole body. I had lived it.
We built something that mattered. Parents told us we had changed their lives. Clinicians said we were solving a real problem. I priced our service at what families could afford.
When a single mother asked for an extended payment plan, I said yes. When a clinic asked for custom features that weren't in scope, I said yes. When a team member needed support that went well beyond my role as their employer, I provided it. I thought I was being human-centered and modeling the values I wanted the company to embody.
What I was doing was running many material business decisions through an empathic lens, without realizing that empathy, when applied indiscriminately to decisions that require different tools, creates distortion. I was succeeding at empathy. The business model could not survive it.
It took me years to understand what had happened. When I did, I started to see a similar pattern in founders I mentored, in companies I advised, and in the failure rate of builders the sector had decided to call "mission-driven," as though that explained something.
Empathy-wired founders often underprice because charging full value feels like abandoning the people they exist to serve. They overextend because saying no to someone in need violates something deep in their operating system. They avoid difficult conversations with investors, team members, and customers because their nervous systems register conflict as a form of harm. They over-identify with their companies, their users, and their mission to the point where the boundaries between self and work dissolve, and burnout becomes likely.
This behavior follows a coherent internal logic when you understand the cognitive wiring underneath it. That is why naming the wiring matters. Interrupting a pattern you cannot see is hard. Building guardrails around a dynamic you have not had language for is hard.
The Playbook Problem
The startup ecosystem runs on playbooks: lean startup, blitzscaling, growth hacking, jobs-to-be-done, product-market fit. These frameworks were built by and for founders operating from what I think of as an efficiency default mindset, founders wired to optimize for leverage, speed, competition, and scale, for whom the transaction is the primary unit of analysis and the human complexity on either side is a variable to be managed.
That is an observation about cognitive diversity. Those founders have built things that work. Their playbooks work for them, in their context, with their wiring.
When empathy-wired founders apply those playbooks uncritically, they often fracture. The frameworks assume decision-making processes that do not match their actual operating system. They are asked to "move fast" when their instinct is to pause and understand impact. They are told to "charge what the market will bear" when their reflex is to charge what the person in front of them can afford. They are coached to "fire fast" when personnel decisions run through an empathic filter that treats termination as a form of harm.
The guidance can be wrong for this cognitive profile when applied without translation. Because the sector has rarely recognized empathy-wired founders as a distinct archetype, it has seldom built the translation layer. It keeps handing them the same maps.
The Cost That Often Goes Uncounted
When empathy-wired founders exit the sector, burned out and underfunded, having convinced themselves they were not built for business, several things happen that often do not appear in failure post-mortems.
The solutions they were building often do not get picked up by the next team with the same approach. The problems they were solving, such as healthcare gaps, educational inequity, and accessibility failures, do not pause while the sector figures out what went wrong. The communities they were serving, many of which had extended trust based on the human quality of the founder's engagement, absorb another lesson about what to expect from people who say they want to help.
Something subtler happens too. The sector self-selects. Over time, at the margin, the companies that survive skew toward founders who found it easier to hold the line on pricing, to say no without anguish, to treat the business as a mechanism and the mission as a goal rather than an identity. That is selection pressure. It has consequences.
Impact tech, at its best, is meant to build differently, to ask different questions, to hold human complexity inside its products and organizations. When the cognitive diversity that makes that possible gets selected out for a diagnosable reason, the sector can gradually come to resemble the systems it was built to replace.
That loss is significant. It is happening in slow motion.
Technology Encodes Values
Underneath the founder pipeline problem is the problem of what gets built.
Technology encodes values. Systems carry the assumptions, priorities, and blind spots of the people who built them. We have seen what can happen when technology is built largely by efficiency-default thinkers: social media algorithms optimized for engagement over wellbeing, healthcare platforms for billing over care, financial systems for return over access, economic engines that created wealth while leaving many communities structurally behind. Their wiring shaped what they optimized for, and often no one in the room was asking a fundamentally different question.
Empathy-wired founders often build different assumptions into the product layer. They ask who gets left out before the feature ships. They feel the edge cases because the edge cases are often why they started building. They carry the person on the other side of the interface into the design meeting as a human being whose experience is as real and complex as their own.
That is a different set of inputs into consequential decisions. When those inputs are removed from the room because the people who carry them burn out and exit for a diagnosable, preventable reason, the outputs change. The products change. The defaults that many people will interact with are set by people who may not be wired to ask certain questions.
Software scales. A value encoded into a platform at founding gets replicated many times before anyone notices it is there. The cognitive diversity, or absence of it, present at the moment of creation has compounding consequences that outlast individual founders, funding cycles, and policy debates.
The Founding Moment
We are inside a consequential technological transition.
Artificial intelligence, biotechnology, and networked platforms are rewriting how we learn, work, receive care, form identity, govern ourselves, and understand what is real. The decisions being made now, in boardrooms and product roadmaps and training datasets, will shape human experience for a long time.
Major technological transitions have often carried a fork in the road, a moment where the values embedded in the new infrastructure could have gone differently. The printing press, the industrial revolution, the internet: each arrived with transformative potential and was captured early by people optimizing for power and efficiency. The consequences of those captures are still playing out.
We are inside that kind of moment again. The people most naturally wired to ask the questions that could make this one go differently are burning out on the margins, for a reason that has often gone unnamed.
The empathy-wired founders leaving the sector are losing their companies and being removed from the room where consequential design decisions are being made. They are being removed quietly, through a process that looks like individual failure and often has systemic causes.
What We Are Really Losing
"Impact tech underperforms its potential" does not fully capture what is at stake.
What we are at risk of losing includes the institutionalization of care: the instinct to see another person's suffering and feel obligated to respond, the reflex that says this person's situation is my problem too. That instinct is not fixed; it is cultivated or eroded. The systems we build either strengthen it or diminish it.
There is evidence of what happens when the systems mediating human connection are built without this capacity. Loneliness is a public health concern. Trust in institutions is low. Political systems fracture along lines of who feels seen. Communities that were isolated can become more so. Technology that was meant to connect us has in some ways made it harder to tolerate each other's full humanity.
That is the compounding output of systems built by people optimizing for something other than human flourishing. We are building the next layer, systems that will mediate healthcare decisions, educational paths, economic opportunities, and the experience of being known and cared for by institutions. The conversation about who gets to build them often has a blind spot about what cognitive diversity actually means.
The Automation of Indifference
Indifference rarely announces itself.
It rarely arrives as a decision. People seldom hold a meeting and vote to stop caring. It happens gradually, then invisibly, then as the new normal. Small optimizations away from human complexity. Edge cases deprioritized because they do not move the metric. Vulnerable people abstracted into data points because the system was not built to hold their full humanity. Little of it feels like a moral failure in the moment. It feels like efficiency.
At some point the capacity to perceive the harm can be gone, because the people who would have perceived it were not in the room, the systems that would have surfaced it were not built, and the language that would have named it was not developed. You cannot course-correct toward a value you have lost the ability to see.
Empathy can disappear from institutions through many small defaults that each seemed reasonable at the time.
This moment is distinctive in that we are building cognitive infrastructure that will shape how future institutions think. AI systems trained on the outputs of empathy-depleted platforms can encode that depletion as baseline. Algorithms optimizing for engagement in a low-trust world can define engagement in a low-trust way. The models being trained now will train the models that come after them. The values present or absent at this layer will propagate forward with a fidelity and speed that few previous technologies have matched.
There is a trap: empathy-depleted tools may not be able to measure the cost of empathy depletion. The instrument and the subject of measurement overlap. An audit conducted by systems that lack the capacity to perceive human complexity may not find evidence of that absence. It may find efficiency metrics and engagement numbers and conclude that things are functioning as designed. In a sense they are.
The Window Is Closing
Previous technological transitions often left room for correction. The printing press enabled propaganda and reformation. The industrial revolution enabled exploitation and labor movements. The internet enabled surveillance capitalism, and it also enabled organizing, resistance, and connection across difference. The harms were real; the capacity to recognize them, name them, and fight back often remained. Another generation could often see what had gone wrong and try again.
The assumption that we will have another chance underlies many arguments that say "we will figure it out later."
It may not hold this time.
We are building systems that think, learn, adapt, and make decisions at a scale and speed that human institutions struggle to monitor. We are building them during a window, narrow and closing, before they become self-reinforcing in ways that may be hard for a subsequent generation of reformers to undo.
The window in which empathy-wired founders could still enter the room, shape the defaults, and ask questions that efficiency-default systems tend not to ask is a function of where we are in the buildout of this infrastructure. We are not early.
Quarters that pass without naming the misdiagnosis, without building the support systems, without keeping empathy-wired founders in the game long enough to matter, bring us closer to a threshold past which the question of who built these systems becomes largely historical.
What the Right Diagnosis Makes Possible
The misdiagnosis is fixable. The window has not yet closed.
If we name the empathic default mindset as a real, legitimate, and distinct cognitive profile, we can build what has been missing.
Self-awareness tools that help empathy-wired founders recognize their own patterns in real time. Frameworks that surface when empathy is driving a pricing decision, a hiring choice, or a boundary that will not hold. The difference between feeling and deciding is often not instinctive for founders wired this way. It can be taught when someone names what is happening.
Playbooks built for how these founders actually think. Pricing frameworks that make charging sustainable rates feel like a form of mission integrity. Decision-making guides for the moments where the empathic reflex is strongest and most likely to cause harm: conflict, fundraising, personnel decisions, market positioning. These largely do not exist yet in meaningful form.
Operational systems, such as GTM strategies, revenue operations frameworks, and governance structures, designed to carry the weight of a high-empathy organization without breaking under it. Systems that protect the mission from well-intentioned but unsustainable decisions without requiring founders to suppress the wiring that made them suited to the work.
A typology of founder that changes how accelerators screen candidates, how investors evaluate potential, how mentors provide guidance, and how founders see themselves. The language "empathy-wired founder" is legitimizing. It says: this is a real thing, it has a name, here is how it works.
Community infrastructure: networks and advisor benches made up of people who have navigated this specific terrain, rather than generic startup mentors applying playbooks designed for a different operating system.
The through-line: solutions built for the cognitive profile, around it, in service of it.
The Question Underneath Everything
Philosophers have a name for the capacity to recognize another person as fully human, as someone whose inner life is as real and complex as your own: moral imagination. It is a root of many ethical systems. It is what makes solidarity and justice conceivable and the arc of history capable of bending.
Moral imagination requires cultivation and practice. It requires systems that reflect it back to us and reinforce it as real and worth protecting.
We are handing the cultivation of moral imagination over to systems built largely without it. The people most wired to fight for the right outcome, who feel the stakes as an obligation, are burning out and leaving for a reason that has often gone unnamed.
The significance of that extends beyond sector, founder pipeline, or capital efficiency.
It touches the question of whether powerful technologies will be built in a way that makes us more human or less. Whether the systems that will mediate how human beings care for each other over the next fifty years will carry empathy as a feature or treat it as a bug. Whether, a generation from now, the world these systems have shaped will have lost the capacity to remember what care was for.
That is what is at stake in the misdiagnosis. That is why naming it matters. That is why now.