The Architecture Litmus Test: Can You Code on a Plane?

Software Architecture · October 14, 2025

If you can't run and debug your service with just a database, a message queue and seed data—no internet, no VPN, no other services—your software architecture is broken.

The Problem

We build separate deployable services for independence. Then we connect them with synchronous HTTP calls for everything—reads, writes, commands. We've recreated the coupling we were trying to escape—except now it's distributed, slower and fails in more interesting ways.

Every synchronous call is a dependency. Every dependency is a coupling. Every coupling is a place where things break together instead of separately.

The Plane Test

Here's a simple test for your architecture: Can a developer work on your service during a 10-hour flight with no internet?

Not "can they write code"—can they:

  • Start the service locally
  • Run the full test suite
  • Debug a feature end-to-end
  • Add new functionality and verify it works

If the answer is no, you don't have a service. You have a highly coupled system with extra steps.

True service independence means you can develop, test and debug with no external dependencies beyond your data store and message queue (both of which you can run in-memory). Inputs and outputs are messages, which are easy to test and let you simulate any scenario the service might encounter.
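
To make "run in-memory" concrete: for offline development and tests, the message queue can be a few lines of code. A minimal sketch, with an interface that is an assumption here (the publish signature matches the messageQueue used in the outbox processor below; the real broker client plugs in behind the same shape in deployment):

// A minimal in-memory message queue for offline development and tests.
// Hypothetical interface; a real client (RabbitMQ, Kafka, SQS, ...) replaces
// it behind the same publish/subscribe shape in deployment.
type Handler = (payload: unknown) => Promise<void>;

class InMemoryMessageQueue {
  private handlers = new Map<string, Handler[]>();

  subscribe(eventType: string, handler: Handler): void {
    const existing = this.handlers.get(eventType) ?? [];
    this.handlers.set(eventType, [...existing, handler]);
  }

  async publish(eventType: string, payload: unknown): Promise<void> {
    for (const handler of this.handlers.get(eventType) ?? []) {
      await handler(payload);
    }
  }
}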

What This Looks Like in Practice

Broken: Synchronous Service Calls

Your authentication service needs to handle a BankID sign-in. But first:

// Auth service - tightly coupled to five other services
async function handleBankIdSignIn(ssn: string) {
  // Call user service to check if user exists
  const user = await http.get('user-service/users/by-ssn/' + ssn);

  // Call profile service to get user profile
  const profile = await http.get('profile-service/profiles/' + user.id);

  // Call permissions service to get user roles
  const permissions = await http.post('permissions-service/check', {
    userId: user.id,
    resource: 'banking'
  });

  // Call audit service to log the sign-in
  await http.post('audit-service/events', {
    userId: user.id,
    event: 'bankid.signin',
    timestamp: new Date()
  });

  // Finally, create the session
  const session = await db.sessions.insert({
    userId: user.id,
    permissions: permissions.roles,
    expiresAt: new Date(Date.now() + 3600000) // 1 hour
  });

  // Call notification service to send confirmation
  await http.post('notification-service/send', {
    userId: user.id,
    type: 'signin-confirmation'
  });

  return session;
}

// Result:
// - Can't run locally without five other services running
// - Can't test without mocking five HTTP clients
// - Can't develop on a plane
// - One service down = entire sign-in flow breaks

Better: Events and Local Data

Instead of calling other services synchronously, your auth service maintains local copies of the data it needs and communicates via events:

// Auth service - independent and self-contained
async function handleBankIdSignIn(ssn: string) {
  // Everything in a transaction - ACID guarantees
  return await db.transaction(async (tx) => {
    // Use local read models (populated from events)
    const user = await tx.users.findOne({ ssn });
    const profile = await tx.profiles.findOne({ userId: user.id });
    const permissions = await tx.permissions.findOne({ userId: user.id });

    // Create the session locally
    const session = await tx.sessions.insert({
      userId: user.id,
      permissions: permissions.roles,
      expiresAt: new Date(Date.now() + 3600000) // 1 hour
    });

    // Store event in outbox table - part of the same transaction
    await tx.outbox.insert({
      eventType: 'auth.signin-completed',
      payload: {
        userId: user.id,
        sessionId: session.id,
        method: 'bankid',
        timestamp: new Date()
      }
    });

    // If anything fails, entire transaction rolls back
    return session;
  });
}

// A separate process publishes events from the outbox
// Integrations that can't take part in the transaction stay at the edges
async function processOutbox() {
  const events = await db.outbox.findPending();
  for (const event of events) {
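    // Publish-then-mark gives at-least-once delivery: a crash between these
    // two steps replays the event on restart, so handlers must be idempotent
    // (hence the upserts below)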
    await messageQueue.publish(event.eventType, event.payload);
    await db.outbox.markPublished(event.id);
  }
}

// Separate event handlers update local data (or use a CDC tool)
events.on('user.created', async (event) => {
  await db.users.upsert({
    id: event.data.id,
    ssn: event.data.ssn,
    email: event.data.email
  });
});

events.on('profile.updated', async (event) => {
  await db.profiles.upsert({
    userId: event.data.userId,
    name: event.data.name,
    phone: event.data.phone
  });
});

events.on('permissions.changed', async (event) => {
  await db.permissions.upsert({
    userId: event.data.userId,
    roles: event.data.roles
  });
});

// Result:
// ✓ Runs locally with just database and seed data
// ✓ Tests without mocking external services
// ✓ Develops on a plane
// ✓ Other services down? Sign-in still works

The Real Benefits

1. Developer Velocity

Your developers spend their time building features, not:

  • Debugging network timeouts
  • Maintaining elaborate mock configurations
  • Waiting for other teams to fix their services
  • Fighting with VPN connections

2. Testability

Your test suite actually runs. Fast. Without flakiness. And it's deterministic.

Testing with synchronous HTTP calls:

describe('BankID sign-in', () => {
  beforeEach(() => {
    // Mock five different HTTP clients
    mockUserService.getUser.mockResolvedValue({ id: 1, ssn: '123456' });
    mockProfileService.getProfile.mockResolvedValue({ name: 'Test User' });
    mockPermissionsService.check.mockResolvedValue({ roles: ['user'] });
    mockAuditService.log.mockResolvedValue({ logged: true });
    mockNotificationService.send.mockResolvedValue({ sent: true });
  });

  // Hope your mocks match reality
  it('handles BankID sign-in', async () => {
    // Test against mocks, not real behavior
  });
});

Testing with message passing and local data:

describe('BankID sign-in', () => {
  beforeEach(async () => {
    // Seed local database with test data
    await db.users.insert({ id: 1, ssn: '123456', email: 'test@example.com' });
    await db.profiles.insert({ userId: 1, name: 'Test User', phone: '+46701234567' });
    await db.permissions.insert({ userId: 1, roles: ['user', 'customer'] });
  });

  // Test against real code paths with real data
  it('handles BankID sign-in', async () => {
    const session = await handleBankIdSignIn('123456');

    // Verify actual database state
    expect(session.userId).toBe(1);
    expect(session.permissions).toEqual(['user', 'customer']);

    // Verify the event landed in the outbox (the outbox processor publishes it)
    const pending = await db.outbox.findPending();
    expect(pending).toContainEqual(
      expect.objectContaining({
        eventType: 'auth.signin-completed',
        payload: expect.objectContaining({
          userId: 1,
          method: 'bankid'
        })
      })
    );
  });
});

"Just use mocks!" you say. Sure. Now maintain those mocks as the actual services evolve. Keep them in sync. Debug why your tests pass but production fails. Explain to your new developers why the mock returns different data than the real service.

Mocks test that your code can call other code. They don't test that your system actually works. You're testing the shape of the integration, not the behavior.

With local read models, you're testing against real data in a real database with real constraints and real transactions. Your tests exercise the same code paths production uses. When tests pass, you have actual confidence.

But the real power of message passing isn't just avoiding mocks. It's that you can simulate every possible state your service might encounter. Corrupt data? Missing permissions? Race conditions between events? Just seed your database with that exact state and verify your service handles it correctly. With HTTP chains, you're limited to testing whatever states the other services happen to expose—and coordinating those states across multiple services is nearly impossible. If you want even more powerful testing, use property-based testing and define sequences of events to verify your service handles any combination correctly.
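
As a sketch of that last idea with a library such as fast-check (the event shapes mirror the handlers above; db.reset() and dispatch() are hypothetical test helpers):

import fc from 'fast-check';

// Generate arbitrary events of the kinds this service consumes
const userCreated = fc.record({
  type: fc.constant('user.created'),
  data: fc.record({
    id: fc.integer({ min: 1, max: 3 }),
    ssn: fc.string(),
    email: fc.string()
  })
});

const permissionsChanged = fc.record({
  type: fc.constant('permissions.changed'),
  data: fc.record({
    userId: fc.integer({ min: 1, max: 3 }),
    roles: fc.array(fc.constantFrom('user', 'customer', 'admin'))
  })
});

it('applies any event sequence with last-write-wins semantics', async () => {
  await fc.assert(
    fc.asyncProperty(
      fc.array(fc.oneof(userCreated, permissionsChanged), { maxLength: 20 }),
      async (sequence) => {
        await db.reset(); // hypothetical: fresh database per run
        for (const event of sequence) {
          await dispatch(event.type, event); // hypothetical: invokes the handlers above
        }

        // Invariant: the read model reflects the last permissions event per user
        const lastRoles = new Map<number, string[]>();
        for (const event of sequence) {
          if (event.type === 'permissions.changed') {
            lastRoles.set(event.data.userId, event.data.roles);
          }
        }
        for (const [userId, roles] of lastRoles) {
          const row = await db.permissions.findOne({ userId });
          expect(row.roles).toEqual(roles);
        }
      }
    )
  );
});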

3. Performance

Let's talk about what actually happens when you chain synchronous HTTP calls:

// Synchronous approach - sequential blocking calls
async function handleBankIdSignIn(ssn: string) {
  const user = await http.get('user-service/...');        // 50ms
  const profile = await http.get('profile-service/...');  // 50ms
  const permissions = await http.post('permissions/...'); // 80ms
  await http.post('audit-service/...');                   // 30ms
  const session = await db.sessions.insert(...);          // 10ms
  await http.post('notification-service/...');            // 40ms

  return session; // Total: 260ms per sign-in
}

// With 1000 sign-ins queued back to back:
// Cumulative time: 260,000ms ≈ 4.3 minutes (if you're lucky)
// Reality: much worse once connection pools exhaust

// Event-driven approach - local reads, async writes
// (Could also be a single query!)
async function handleBankIdSignIn(ssn: string) {
  return await db.transaction(async (tx) => {
    const user = await tx.users.findOne(...);           // 2ms
    const profile = await tx.profiles.findOne(...);     // 2ms
    const permissions = await tx.permissions.findOne(...); // 2ms
    const session = await tx.sessions.insert(...);      // 3ms
    await tx.outbox.insert(...);                        // 2ms

    return session; // Total: 11ms per sign-in
  });
}

// The same 1000 sign-ins back to back:
// Cumulative time: 11,000ms = 11 seconds
// Roughly 23x faster (260ms vs 11ms per sign-in)

But it gets worse. When all your services hit the same legacy integration service for every request:

  • That integration service becomes your bottleneck - every GET and POST queues up
  • Connection pools exhaust across all services simultaneously
  • Network latency and serialization overhead compounds with each hop
  • Database connection pools saturate as every HTTP call holds a connection while waiting for other HTTP calls

You spent millions building a new system for scalability, then made every request wait on synchronous calls to the slowest service in your architecture.

4. Resilience

Services truly operate independently. When the audit service is down:

  • Synchronous: BankID sign-in fails entirely - users can't log in
  • Event-driven: Sign-in completes successfully using local data. Audit service processes events when it recovers.

Common Objections

"But eventual consistency!"

Yes. Your data will be eventually consistent. You know what else is eventually consistent? Your current system when services time out, when deployments roll, when databases lag.

The difference is you're designing for it instead of pretending it doesn't happen.

"What about data duplication?"

You're already duplicating data—in caches, in CDNs, in read replicas. This is just being honest about it.

Each service maintains the specific view of data it needs. Not the entire user table—just the fields required for its domain logic.
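
As an illustration (field names beyond the earlier examples are hypothetical), the user service might own a wide record while the auth service's read model keeps only the fields sign-in needs:

// Owned by user-service: the full record (illustrative fields)
interface User {
  id: number;
  ssn: string;
  email: string;
  name: string;
  address: string;
  marketingPreferences: string[];
  createdAt: Date;
}

// The auth service's local read model: only what sign-in needs
interface AuthUser {
  id: number;
  ssn: string;
  email: string;
}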

"What if the data gets out of sync?"

Then you have monitoring and reconciliation processes. Same as you should have for any distributed system. There's no free lunch.

The difference is failures are visible and recoverable, not hidden behind retry logic and circuit breakers that mask data inconsistencies.
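
A hedged sketch of what that reconciliation can look like (the snapshot input, its source and the metrics client are assumptions; the table helpers reuse the examples above):

// Scheduled job: compare the local permissions read model against a snapshot
// from the owning service (a nightly export, a CDC snapshot, or a replayed
// stream - the source is an assumption here), then alert and repair any drift
async function reconcilePermissions(snapshot: Array<{ userId: number; roles: string[] }>) {
  for (const expected of snapshot) {
    const local = await db.permissions.findOne({ userId: expected.userId });
    const sorted = (roles: string[]) => [...roles].sort().join(',');
    const drifted = !local || sorted(local.roles) !== sorted(expected.roles);

    if (drifted) {
      // Failures stay visible: count the drift, then repair from the source of truth
      metrics.increment('reconciliation.permissions.drift'); // hypothetical metrics client
      await db.permissions.upsert(expected);
    }
  }
}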

The Reality Check

Look at your current architecture. Count how many services your auth service calls synchronously. Now imagine they're all down except yours.

Can users still sign in? Even with degraded functionality?

If the answer is no, you don't have a well-architected system. You have a highly coupled system that's harder to deploy, harder to debug and fails in more creative ways than a modular monolith.

The entire point of separate services is independent deployability and failure isolation. If you need five services running to test one, you've failed at both.

The Path Forward

Start small. Pick one service. One with lots of synchronous dependencies and a frustrated team.

Step 1: Identify what data this service actually needs from others

Step 2: Create local read models for that data

Step 3: Subscribe to events from other services to keep those read models updated

Step 4: Replace synchronous calls with local data reads

Step 5: Publish events instead of calling other services

Now try the plane test. Clone the repo, start the database, seed some data and run the service. If it works, you've built something that's actually independent.
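
The seed step can be a single file. A sketch, reusing the read models and test data from the examples above:

// seed.ts - offline seed data for the plane test
async function seed() {
  await db.users.upsert({ id: 1, ssn: '123456', email: 'test@example.com' });
  await db.profiles.upsert({ userId: 1, name: 'Test User', phone: '+46701234567' });
  await db.permissions.upsert({ userId: 1, roles: ['user', 'customer'] });
}

seed().then(() => console.log('Seeded. Start the service and sign in with SSN 123456.'));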

The Bottom Line

Synchronous service calls are the new GOTO. They work. They're familiar. They're also a maintainability nightmare that creates coupling you spent millions trying to eliminate.

If you can't code on a plane, your architecture is working against you. Fix it before you add the next service.

Your developers will thank you. Your operations team will thank you. And that next production incident? It'll affect one service instead of seventeen.

(This blog post might scare you and it should. Distributed systems are hard. Don't use them unless you need to.)