Strategy & Execution · 10 min read

What Flying Taught Me About Running a Business

Cockpit principles that transfer directly to business: from data-driven weather decisions and continuous course corrections to crystal-clear targets and the discipline of checklists.

Every few weeks I drive out to Schönhagen, a quiet grass-and-asphalt airfield south of Berlin, preflight my Cirrus SR22T, and go somewhere. The routine is always the same: check NOTAMs (Notices to Airmen — temporary hazards, runway closures, restricted airspace), pull up weather, file a flight plan, run the checklist, and launch. After twenty-five years building and leading technology companies — four exits, a decade of product leadership, and now running tech at IDnow — I keep noticing how much of what makes a safe pilot also makes a competent executive. Not as a cute metaphor. As a literal operating system.

Know exactly where you’re landing before you take off

Before I start N111DU’s engine, I know my destination airport, the active runway, the instrument approach I’ll fly if the weather deteriorates, the alternate airport if that approach doesn’t work out, and the fuel I need to get to all of those places plus a legal reserve. That’s not optional. It’s regulation, and it’s survival.

Business teams start projects all the time without anything close to that level of target clarity. “Improve conversion” is not a destination — it’s a compass heading at best. A real target looks more like “reduce drop-off between identity-capture and liveness check by 15% for mobile users in the DACH region by end of Q3, measured in our Amplitude funnel.” That’s a runway. You can plan an approach to it. You can calculate whether you have the fuel — the people, the time, the budget — to get there. And you can name your alternate: if the data shows the problem is actually in document upload latency rather than the liveness step, you know where to divert.
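A target specified like that can be written down as a small data structure — destination, fuel check, and alternate in one place. This is an illustrative sketch; the field names and numbers are mine, not from any real planning tool:

```python
from dataclasses import dataclass

@dataclass
class Target:
    """A business target specified like a flight plan: destination, deadline, alternate."""
    metric: str      # the "runway" — what we measure
    baseline: float  # where we are now
    goal: float      # where we intend to land
    deadline: str    # when we must be there
    alternate: str   # the diversion plan if this approach doesn't work

    def is_reached(self, current: float) -> bool:
        # For a drop-off metric, lower is better
        return current <= self.goal

drop_off = Target(
    metric="drop-off between identity-capture and liveness check (mobile, DACH)",
    baseline=0.40,
    goal=0.34,  # a 15% relative reduction from the 0.40 baseline
    deadline="end of Q3",
    alternate="investigate document-upload latency instead",
)
print(drop_off.is_reached(0.33))  # target met
```

The point is not the code but the forcing function: if you cannot fill in every field, you have a compass heading, not a runway.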

I’ve watched smart teams burn months because nobody pinned down what “done” looked like before they started building. In aviation, that ambiguity kills people. In business, it just kills quarters — but the principle is the same.

Weather is data, not opinion

Pilots check weather obsessively, and they do it from structured, standardised sources. A METAR is a snapshot of current conditions at an airport — wind, visibility, cloud layers, temperature, pressure — encoded in a terse format that hasn’t changed in decades. A TAF is a forecast for the next 24 to 30 hours. Radar imagery shows precipitation cells. PIREPs (pilot reports) tell you what someone actually experienced at a given altitude and location ten minutes ago.

None of this is gut feeling. Nobody looks out the window of the FBO (the fixed-base operator — the airfield’s service building) and says “looks fine to me.” You pull the data, you read the data, you make the decision. If the TAF shows a 200-foot ceiling and half a mile visibility at your destination and you’re not instrument-rated and current, you don’t go. Full stop. The data decides.

I find it remarkable how often business leaders — people who would never dream of flying into a thunderstorm — make consequential product or market decisions on instinct. “I think customers want this.” “The market feels like it’s turning.” Those are feelings, not METARs. Your analytics dashboards, your cohort data, your NPS trends, your support ticket clustering — those are your weather reports. Read them with the same discipline a pilot reads a briefing. When the data says don’t launch, don’t launch, no matter how good the demo felt internally.

The plan is not the flight

I file an IFR (Instrument Flight Rules) flight plan before every cross-country trip. It specifies my route, altitude, speed, fuel endurance, and estimated time en route down to the minute. It’s precise and detailed and wrong almost immediately after takeoff.

Wind at altitude is never exactly what the forecast predicted. Air traffic control reroutes you around other traffic or weather. You pick up a headwind component that burns fuel faster than planned. The GPS shows your ground track drifting left of course. So you correct. You adjust heading, recalculate fuel, request a different altitude. This is completely normal. Nobody panics. Deviation from plan is not failure — it’s physics.

Yet in business, missing the quarterly plan by even a small margin triggers alarm. Teams treat the plan as a commitment rather than what it actually is: a best estimate filed before departure. The real skill is the continuous correction. How quickly do you detect the deviation? How accurately do you diagnose its cause? How decisively do you adjust? A GPS updates your position every second. Your business instrumentation should give you something close to the same cadence for the metrics that matter.

The best product organisations I’ve built operated this way — weekly check-ins against the trajectory, not quarterly post-mortems against a stale plan.
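The detect-diagnose-adjust loop above can be sketched as a simple weekly deviation check — the business equivalent of a GPS flagging cross-track error. The metric, numbers, and tolerance are illustrative:

```python
def course_correction(planned: list[float], actual: list[float],
                      tolerance: float = 0.05) -> list[tuple[int, float]]:
    """Compare actuals to plan each period and flag deviations beyond tolerance.

    Returns (period, relative_deviation) pairs for every miss that exceeds
    the tolerance — no alarm, just a prompt to correct heading.
    """
    alerts = []
    for week, (plan, fact) in enumerate(zip(planned, actual), start=1):
        deviation = (fact - plan) / plan
        if abs(deviation) > tolerance:
            alerts.append((week, round(deviation, 3)))
    return alerts

# Planned weekly signups vs. what actually happened (made-up numbers)
plan = [100, 110, 120, 130]
actual = [98, 104, 108, 112]
print(course_correction(plan, actual))
```

Run weekly, a check like this turns deviation into routine input rather than quarterly surprise — the drift shows up in week two, not in the post-mortem.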

Your cockpit instruments are your KPIs

The Cirrus Perspective+ avionics suite puts an enormous amount of information in front of me at all times. Airspeed, altitude, heading, vertical speed, engine CHT and EGT (cylinder head and exhaust gas temperatures), fuel flow, fuel remaining, GPS ground track, terrain proximity, traffic in the vicinity, weather overlay. Each instrument answers a specific question. Together they create situational awareness — a continuously updated mental model of where I am, where I’m going, and what’s happening around me.

Most companies I’ve worked with have worse instrumentation than my single-engine airplane. They can tell you last month’s revenue but not today’s deployment failure rate. They know their headcount but not their cycle time. They track OKRs quarterly but can’t see a customer health trend changing week over week. That’s like flying with an altimeter and nothing else — you know how high you are, but not whether you’re about to fly into a mountain.

Building the right instrument panel for a product organisation is one of the highest-leverage things a technology leader can do. Not a dashboard with fifty charts nobody reads — a curated set of instruments that answer the questions you actually need answered to make decisions. Revenue. Burn rate. Deployment frequency. Incident rate. Customer activation. Pipeline coverage. Fuel remaining.
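A curated panel like that can be made concrete as a small config: each instrument answers one question and has a red-line, like an engine gauge. The metric names and thresholds below are invented for illustration:

```python
# Each entry: the question the instrument answers, its red-line, and
# whether crossing the red-line means going above it or below it.
PANEL = {
    "deployment_failure_rate": {"question": "Is shipping safe today?",
                                "red_line": 0.05, "higher_is_worse": True},
    "weekly_active_accounts":  {"question": "Is activation trending right?",
                                "red_line": 900, "higher_is_worse": False},
    "monthly_burn_eur":        {"question": "How much fuel is left?",
                                "red_line": 500_000, "higher_is_worse": True},
}

def scan_panel(readings: dict[str, float]) -> list[str]:
    """Return the instruments currently in the red — a warning annunciator."""
    warnings = []
    for name, gauge in PANEL.items():
        value = readings[name]
        in_red = (value > gauge["red_line"] if gauge["higher_is_worse"]
                  else value < gauge["red_line"])
        if in_red:
            warnings.append(f"{name}: {value} ({gauge['question']})")
    return warnings

print(scan_panel({"deployment_failure_rate": 0.08,
                  "weekly_active_accounts": 1200,
                  "monthly_burn_eur": 450_000}))
```

The discipline is in what you leave out: if an instrument doesn't change a decision, it doesn't belong on the panel.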

Checklists are not for beginners

I have flown hundreds of hours in the SR22T. I could recite the before-takeoff checklist from memory. I run it from the printed card every single time anyway.

This is not a training-wheels habit. It’s a deliberate practice rooted in decades of accident investigation. The aviation industry learned — through crashes, through deaths — that human memory is unreliable under load. Experienced pilots skip steps not because they’re careless but because the brain, under even mild stress or distraction, prunes what it considers routine. Checklists catch what confidence misses.

I’ve brought this discipline into every engineering organisation I’ve run. Deployment checklists. Incident response runbooks. Customer onboarding sequences. Launch readiness reviews. Not because the team doesn’t know what to do, but because “knowing what to do” and “reliably doing it every time under pressure” are two completely different things. Atul Gawande’s The Checklist Manifesto made this case for surgery. Aviation made it decades earlier. Software engineering is still catching up.
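A deployment checklist in this spirit can be as simple as a list of named checks run in order, aborting at the first failure — the equivalent of stopping a takeoff when an item doesn't check out. The steps here are illustrative stand-ins:

```python
from typing import Callable

def run_checklist(steps: list[tuple[str, Callable[[], bool]]]) -> bool:
    """Run every step in order; report and abort on the first failure."""
    for name, check in steps:
        if not check():
            print(f"NO-GO: {name}")
            return False
        print(f"ok: {name}")
    return True

before_deploy = [
    ("tests green", lambda: True),
    ("migration dry-run passed", lambda: True),
    ("rollback plan documented", lambda: False),  # deliberately failing step
]
run_checklist(before_deploy)
```

The value is the same as the printed card in the cockpit: the list, not memory, decides whether a step happened.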

Error culture is not optional — it’s how you learn

Aviation has the most effective error culture of any industry I’ve encountered. When something goes wrong — an incident, a near-miss, a procedural deviation — it gets reported, investigated, and shared. Not to assign blame, but to prevent recurrence. The entire system is built on the premise that humans make mistakes, and the only way to improve safety is to talk about those mistakes openly.

Every accident investigation report from the NTSB or the BFU (the US and German accident investigation boards) reads the same way: a chain of events, each one individually survivable, that combined into catastrophe because nobody caught the sequence in time. The reports don’t ask “who screwed up?” They ask “what conditions allowed this to happen, and how do we change those conditions?” That’s why commercial aviation is as safe as it is — not because pilots are perfect, but because the industry learned to treat every error as data rather than disgrace.

In business, most organisations do the opposite. Mistakes get buried, explained away, or quietly attributed to someone who then gets sidelined. The result is that the organisation never learns. The same errors recur because nobody documented them, nobody shared them, nobody changed the system that produced them.

The best engineering cultures I’ve built borrowed directly from aviation. Blameless post-mortems after incidents. Near-miss reporting without consequences. Shared learning sessions where teams present their failures alongside their successes. This only works when leadership genuinely models it — when the CTO stands up and says “here’s a decision I got wrong last quarter and what I learned from it.” The moment error reporting carries career risk, it stops. And when it stops, you lose the only reliable mechanism for organisational learning.

Go/no-go is a framework, not a feeling

The hardest decision in flying is the go/no-go. The weather is marginal. The ceiling is reported at 800 feet but the TAF says it might drop to 500. There’s a SIGMET (significant meteorological advisory) for moderate turbulence along your route. You have the rating and the equipment to fly in these conditions, technically. Do you go?

Good pilots don’t make this decision emotionally. They have personal minimums — conditions below which they simply will not fly, regardless of what’s legal or what the airplane can handle. They have alternates planned. They evaluate the full risk picture systematically: weather trend improving or deteriorating? Pilot fatigue level? Complexity of the approach? Passenger expectations creating pressure?

The best business decisions I’ve made followed exactly this structure. Not “do I feel good about this market entry” but: what are the conditions? What’s our minimum threshold for proceeding? What’s the alternate if conditions deteriorate after we commit? What pressures — board expectations, sunk costs, competitive anxiety — might be biasing me toward “go” when the data says “wait”?

I’ve cancelled flights that I probably could have completed safely. I’ve killed product initiatives that might have worked out. In both cases, the discipline is the same: when the risk-reward doesn’t clear your personal minimums, you stay on the ground. There’s always another day to fly, another quarter to ship.
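The core of the framework is that the minimums are committed to in advance, so pressure in the moment can't move the line. A minimal sketch — the metrics and thresholds are invented examples, not a real decision model:

```python
# Personal minimums, set before the decision is on the table.
MINIMUMS = {
    "months_of_runway": 12,           # at least this much "fuel" after committing
    "independent_supporting_signals": 3,  # data points backing the move
}

def go_no_go(conditions: dict[str, float]) -> str:
    """Compare current conditions to pre-committed minimums."""
    for metric, minimum in MINIMUMS.items():
        if conditions.get(metric, 0) < minimum:
            return f"NO-GO: {metric} below personal minimum ({minimum})"
    return "GO"

print(go_no_go({"months_of_runway": 18,
                "independent_supporting_signals": 2}))  # NO-GO: too few signals
```

Whether the thresholds live in code or on a page doesn't matter; what matters is that they were written down before the demo, the board meeting, or the competitor announcement could bias the call.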


Flying doesn’t make you a better leader by some vague process of character building. It makes you better because it forces you, repeatedly and consequentially, to do the things that good leadership requires: define your target precisely, gather data instead of opinions, correct continuously instead of panicking at deviation, maintain situational awareness through disciplined instrumentation, follow checklists even when you think you don’t need them, treat every error as a learning opportunity rather than a failure, and make go/no-go decisions with a framework rather than a feeling.

The cockpit just happens to be a place where the feedback loop is immediate and the consequences of getting it wrong are unambiguous. Business gives you more room to be sloppy. That’s not an advantage — it’s a trap.

Tags: flying · leadership · decision-making · checklists · data-driven · cirrus · aviation