Standard Streams in Node.js: How Processes Communicate

Have you ever wondered how command-line programs receive input from your keyboard and display output on your screen? Or how data flows between different programs when you chain commands together? The answer lies in a fundamental concept called standard streams.

Let's discover how every program running on your computer uses three special channels to communicate with the outside world, and how you can harness these channels to build powerful Node.js applications.

Quick Reference

When to use: Building CLI tools, processing data pipelines, handling user input, redirecting output to files

Basic syntax:

process.stdin.on("data", (chunk) => {
  /* handle input */
});
process.stdout.write("output message");
process.stderr.write("error message");

Common patterns:

  • Reading user input from terminal
  • Writing program output to console or files
  • Piping data between processes
  • Logging errors separately from normal output

What You Need to Know First

This is a foundational article—you don't need deep Node.js knowledge to understand standard streams.

However, you should be comfortable with:

  • JavaScript basics: Variables, functions, and basic syntax
  • Node.js installation: Having Node.js installed on your computer
  • Terminal basics: How to open a terminal and run commands

If you're completely new to Node.js, consider reading an introduction to Node.js first.

What We'll Cover in This Article

By the end of this guide, you'll understand:

  • What standard streams are and why every process has them
  • The three types of streams: stdin, stdout, and stderr
  • How to read input from users in Node.js
  • How to write output and errors programmatically
  • How to pipe data between streams and files
  • How to redirect streams using command-line operators

What We'll Explain Along the Way

We'll introduce these concepts with full explanations:

  • File descriptors (the numbers behind streams)
  • Duplex streams (reading and writing simultaneously)
  • Child processes (running programs from Node.js)
  • Stream piping vs manual data handling

The Three Communication Channels

Every running process—whether it's a Node.js application, a bash command, or any program on your computer—has three communication channels automatically created for it. Think of these as three pipes connecting your program to the outside world.

Let's explore what these channels are and why they exist.

Standard Input (stdin): The Entry Point

Standard input, or stdin, is where data flows into your process.

Imagine you're typing in a terminal. Where does that text go? It flows through stdin into the program that's currently running. When you run cat without any arguments:

$ cat

everything you type travels through stdin to the cat command, which echoes it back. (With an argument like cat file.txt, cat reads the file instead and ignores stdin.) By default, stdin is connected to your keyboard, waiting for you to type something.

File descriptor: 0

Standard Output (stdout): The Normal Output Channel

Standard output, or stdout, is where your process sends its regular results and messages.

When a program wants to show you something—like displaying the contents of a file or printing calculation results—it writes to stdout. By default, stdout connects to your terminal display, which is why you see output appear on your screen.

File descriptor: 1

Standard Error (stderr): The Error Channel

Standard error, or stderr, is dedicated exclusively to error messages and diagnostic information.

You might wonder: "Why have a separate channel for errors? Can't we just use stdout for everything?" Great question! Let's discover why this separation is brilliant.

Imagine you're running a program and want to save its output to a file. If errors and normal output mix together in stdout, your file would contain both results and error messages jumbled together. But with separate channels, you can:

  • Save normal output to one file
  • Save errors to a different file
  • Display errors on screen while redirecting output elsewhere

File descriptor: 2

The File Descriptor Numbers

Notice those numbers? Every stream has a file descriptor—a number the operating system uses to identify and track the stream.

// In Node.js, you can see these descriptors:
console.log("stdin fd:", process.stdin.fd); // 0
console.log("stdout fd:", process.stdout.fd); // 1
console.log("stderr fd:", process.stderr.fd); // 2

These numbers come from Unix tradition, and virtually every operating system follows the same convention. You'll see these numbers referenced when redirecting streams (we'll explore that soon).

Streams in Action: A Bash Example

Let's see standard streams working in a real command. When you run:

$ cat file.txt

Here's what happens behind the scenes:

  1. The cat command starts and gets its three streams automatically
  2. stdin waits for input (in this case, it doesn't need keyboard input)
  3. cat reads from file.txt
  4. stdout receives the file contents from cat
  5. Your terminal displays what stdout received

By default, stdout connects to your terminal screen, so you see the file contents displayed.

Redirecting Streams: Changing Where Data Goes

Now here's where things get interesting. You can redirect these streams to send data to different places. Let's explore how.

Redirecting stdout to a file:

$ cat file.txt > output.txt

The > operator says: "Take stdout and send it to this file instead of the screen." After running this command, output.txt contains whatever cat would have displayed on your terminal.

Redirecting stderr to a file:

$ cat nonexistent.txt 2> error.log

The 2> operator specifically redirects stderr (remember, file descriptor 2?) to a file. If nonexistent.txt doesn't exist, the error message goes into error.log instead of appearing on your screen.

Piping stdout to another process:

$ cat file.txt | grep "search term"

The | operator (called a pipe) connects the stdout of one command to the stdin of another. Data flows from cat directly into grep, which then searches for your term. This is how you chain commands together to build powerful data processing pipelines.

Standard Streams in Node.js

Node.js gives you direct access to these same three streams through the global process object. Let's discover how to use them.

Accessing the Streams

// These are available in every Node.js program:
process.stdin; // Readable stream for standard input
process.stdout; // Writable stream for standard output
process.stderr; // Writable stream for standard error

Notice the terms "readable" and "writable"? This tells us how each stream works:

  • stdin is readable - you read data from it
  • stdout is writable - you write data to it
  • stderr is writable - you write data to it

Technically, when they're attached to a terminal or pipe, these streams are backed by objects that can both read and write (the Node.js docs note they may be net.Socket instances, which are duplex streams). In practice, though, you use stdin for reading and the other two for writing.
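Want to see what actually backs these properties on your machine? Here's a small check you can run. The exact class names printed are an assumption about typical setups: attached to a terminal you'll usually see tty.ReadStream and tty.WriteStream, while piped input typically shows up as a net.Socket.

// stream-types.ts
// Purpose: Inspect what objects back the standard streams
// Note: the printed class names vary with how you launch the process

console.log("stdin: ", process.stdin.constructor.name, "| TTY:", !!process.stdin.isTTY);
console.log("stdout:", process.stdout.constructor.name, "| TTY:", !!process.stdout.isTTY);
console.log("stderr:", process.stderr.constructor.name, "| TTY:", !!process.stderr.isTTY);

Try running it directly and then through a pipe (echo hi | node stream-types.ts) to watch the backing objects change.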

Writing to stdout: Your First Output

Let's start simple. Writing to stdout is how you display messages in your program.

// write.ts
// Purpose: Display a message on the terminal
// This is the most basic output operation

process.stdout.write("Hello from stdout!");

When you run this with node write.ts (recent Node.js versions can strip TypeScript types and run .ts files directly; on older versions, use ts-node or compile to JavaScript first), you'll see:

Hello from stdout!

Notice something? There's no newline character at the end. The text appears, and your terminal prompt continues right after it. To add a newline, include \n:

process.stdout.write("Hello from stdout!\n");

The console.log Connection

Here's a secret: console.log() is actually built on top of process.stdout.write()!

// These two lines produce the same output:
console.log("Hello");
process.stdout.write("Hello\n");

When you call console.log(), Node.js automatically:

  1. Calls process.stdout.write() with your message
  2. Adds a newline character (\n) at the end
  3. Handles formatting if you pass multiple arguments

So every time you've used console.log(), you've been using stdout! The official Node.js documentation confirms this implementation detail.
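To make that connection concrete, here's a rough sketch of a console.log-style function built directly on process.stdout.write. It leans on util.format for the multi-argument formatting step; the real implementation in Node.js core is more involved, so treat this as an illustration rather than the actual source:

// my-log.ts
// Purpose: Approximate console.log using process.stdout.write
// Note: a simplified sketch, not Node's real implementation

import { format } from "util";

function myLog(...args: unknown[]): void {
  // util.format joins multiple arguments and handles
  // printf-style placeholders like %s and %d
  process.stdout.write(format(...args) + "\n");
}

myLog("Hello", "world");            // Hello world
myLog("%d + %d = %d", 1, 2, 1 + 2); // 1 + 2 = 3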

Reading from stdin: Capturing User Input

Now let's discover how to read data flowing into your program. When data arrives through stdin, Node.js emits a 'data' event. You can listen for this event:

// input-echo.ts
// Purpose: Echo back whatever the user types
// Demonstrates: Reading from stdin, writing to stdout

process.stdin.on("data", (chunk: Buffer) => {
  // The 'data' event fires whenever input arrives;
  // chunk is a Buffer containing the input data

  // Echo the input twice: once with a label via console.log,
  // and once verbatim by writing the raw Buffer back to stdout
  console.log("You typed:", chunk.toString());
  process.stdout.write(chunk);
});

Let's break down what happens:

  1. You run the program: node input-echo.ts
  2. The program starts listening for stdin data
  3. You type something and press Enter
  4. The 'data' event fires with your input as a Buffer
  5. The callback executes and processes the input

Try it! Run this program, type something, and press Enter. You'll see your input echoed back.

What's a Buffer? It's Node.js's way of handling binary data. We'll explore Buffers in detail in another article, but for now, just know that chunk.toString() converts the Buffer to a readable string.
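If you're curious, here's a quick look at the difference between a raw Buffer and its string form:

// buffer-peek.ts
// Purpose: Show a Buffer before and after string conversion

const buf = Buffer.from("hi");

console.log(buf);            // <Buffer 68 69> (raw bytes, shown in hex)
console.log(buf.toString()); // "hi" (bytes decoded as UTF-8 text)
console.log(buf[0]);         // 104 (the numeric byte value of "h")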

Writing to stderr: Handling Errors Properly

When something goes wrong, use stderr to communicate it. This keeps errors separate from your program's normal output.

// error-example.ts
// Purpose: Demonstrate proper error logging
// Shows: Difference between stdout and stderr

// Normal program output
process.stdout.write("Starting process...\n");

// Something goes wrong
const configFile = "config.json";
process.stderr.write(`Error: Could not find ${configFile}\n`);

// Try to continue
process.stdout.write("Attempting recovery...\n");

When you run this:

$ node error-example.ts
Starting process...
Error: Could not find config.json
Attempting recovery...

Both messages appear on your terminal. But watch what happens when you redirect stdout:

$ node error-example.ts > output.txt
Error: Could not find config.json

The error still appears on your screen (because stderr wasn't redirected), but the normal messages went to the file. Check output.txt:

Starting process...
Attempting recovery...

See the power of separation? You can redirect normal output while keeping errors visible.

Building Interactive CLI Applications

Let's put this knowledge together and build something practical. Here's a simple interactive program that reads user input and responds:

// interactive-cli.ts
// Purpose: Create a simple command-line chat program
// Demonstrates: Real-world use of stdin and stdout

console.log("Welcome to the Echo Chat!");
console.log('Type something and press Enter. Type "exit" to quit.\n');

process.stdin.on("data", (chunk: Buffer) => {
  const input = chunk.toString().trim();

  // Check if user wants to exit
  if (input.toLowerCase() === "exit") {
    console.log("Goodbye!");
    process.exit(0);
  }

  // Echo back with decoration
  console.log(`Echo: ${input}`);
});

// Handle stream end (Ctrl+D on Unix, Ctrl+Z then Enter on Windows)
process.stdin.on("end", () => {
  console.log("\nStream ended. Goodbye!");
  process.exit(0);
});

This creates a basic interactive loop. Try it out! You'll notice:

  • Each line you type gets echoed back
  • Typing "exit" closes the program
  • Pressing Ctrl+D (Unix/Mac) or Ctrl+Z then Enter (Windows) also ends the program

Working with Files: Practical Stream Operations

Now let's discover how to save stdin data to a file. This is useful for capturing user input, processing data from other programs, or building data pipelines.

Manual Approach: Handling Each Chunk

// save-input-manual.ts
// Purpose: Save everything user types to a file
// Method: Manually handle each data chunk

import fs from "fs";

// Create a writable stream to a file
const outputFile = fs.createWriteStream("output.txt");

console.log(
  "Type your text. Press Ctrl+D (Unix) or Ctrl+Z then Enter (Windows) when done."
);

// Listen for input
process.stdin.on("data", (chunk: Buffer) => {
  // Each time data arrives, write it to the file
  outputFile.write(chunk);
  console.log(`Saved ${chunk.length} bytes to file`);
});

// Clean up when input ends
process.stdin.on("end", () => {
  outputFile.end();
  console.log("File saved successfully!");
});

// Handle errors
outputFile.on("error", (error) => {
  process.stderr.write(`Error writing file: ${error.message}\n`);
  process.exit(1);
});

This works, but there's a more elegant way using pipes.

The Pipe Approach: Connecting Streams Directly

Pipes let you connect streams directly without manually handling each chunk. It's like connecting two hoses—data flows automatically.

// save-input-pipe.ts
// Purpose: Save stdin to a file using pipes
// Method: Direct stream connection (more efficient)

import fs from "fs";

const outputFile = fs.createWriteStream("output.txt");

console.log(
  "Type your text. Press Ctrl+D (Unix) or Ctrl+Z then Enter (Windows) when done."
);

// Connect stdin directly to the file
// Data flows automatically from stdin to the file
process.stdin.pipe(outputFile);

// Notify when done
outputFile.on("finish", () => {
  console.log("File saved successfully!");
});

Much cleaner! The pipe() method:

  • Automatically handles data transfer
  • Manages backpressure (when the destination is slower than the source)
  • Propagates errors properly
  • Cleans up resources when done

When to use pipe vs manual handling:

  • Use pipe when you just want to transfer data from A to B
  • Use manual handling when you need to transform or inspect data as it flows
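One more option worth knowing: recent Node.js versions ship a pipeline() helper in the stream/promises module. It connects streams like pipe() does, but also gives you one place to await completion and catch errors from any stream in the chain. A minimal sketch, assuming Node.js 15 or later:

// save-input-pipeline.ts
// Purpose: Save stdin to a file using stream.pipeline
// Assumes: Node.js 15+ (stream/promises module)

import fs from "fs";
import { pipeline } from "stream/promises";

async function main(): Promise<void> {
  // pipeline() wires the streams together, waits for the transfer
  // to finish, and rejects if any stream in the chain errors
  await pipeline(process.stdin, fs.createWriteStream("output.txt"));
  console.log("File saved successfully!");
}

main().catch((error) => {
  process.stderr.write(`Pipeline failed: ${error.message}\n`);
  process.exit(1);
});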

Understanding Duplex Streams in Practice

I mentioned earlier that standard streams are technically duplex—they can read and write. Let's discover when this capability matters.

Working with Child Processes

When you spawn a child process from Node.js, you interact with its stdin and stdout simultaneously. This is where the duplex nature becomes apparent.

// child-process-demo.ts
// Purpose: Run a child process and communicate with it
// Demonstrates: Bidirectional stream communication

import { spawn } from "child_process";

// Start the 'cat' command as a child process
// 'cat' without arguments reads from stdin and echoes to stdout
const child = spawn("cat");

// Listen to the child's stdout (reading)
child.stdout.on("data", (chunk: Buffer) => {
  console.log("Child process output:", chunk.toString());
});

// Listen for errors from the child's stderr
child.stderr.on("data", (chunk: Buffer) => {
  console.error("Child process error:", chunk.toString());
});

// Write to the child's stdin (writing)
child.stdin.write("Hello from parent process\n");
child.stdin.write("This goes into the child's stdin\n");

// Close the child's stdin after 1 second
setTimeout(() => {
  child.stdin.end();
  console.log("Closed child stdin");
}, 1000);

// Handle child process exit
// (code can be null if the child was killed by a signal)
child.on("close", (code: number | null) => {
  console.log(`Child process exited with code ${code}`);
});

Notice how we're simultaneously:

  • Writing to child.stdin (sending data to the child)
  • Reading from child.stdout (receiving data from the child)

This bidirectional communication is what makes child processes powerful. You can build pipelines where processes talk to each other.
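As a sketch of that idea, here's one way to reproduce the shell pipeline ls | sort -r from Node.js. This assumes a Unix-like environment where both commands exist:

// child-pipeline.ts
// Purpose: Chain two child processes, like `ls | sort -r` in a shell
// Assumes: Unix-like system with `ls` and `sort` available

import { spawn } from "child_process";

const ls = spawn("ls");
const sort = spawn("sort", ["-r"]);

// ls's stdout flows into sort's stdin...
ls.stdout.pipe(sort.stdin);

// ...and sort's stdout flows into our own stdout
sort.stdout.pipe(process.stdout);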

Stream Redirection in the Command Line

Let's explore the powerful redirection operators you can use in your terminal. They work in Unix/Linux/macOS shells and in WSL on Windows; PowerShell supports most of them as well, though not all (for example, it lacks the < input-redirection operator).

Basic Redirection Operators

Overwrite a file with stdout:

$ node script.ts > output.txt

The > operator redirects stdout to a file, creating it if it doesn't exist, or overwriting it if it does.

Append to a file:

$ node script.ts >> output.txt

The >> operator appends stdout to a file without erasing existing content.

Redirect stderr only:

$ node script.ts 2> errors.txt

The 2> operator redirects only stderr (file descriptor 2). Normal output still goes to the screen.

Redirect both stdout and stderr to the same file:

$ node script.ts > output.txt 2>&1

This says: "Redirect stdout to output.txt, then redirect stderr to wherever stdout is going." The 2>&1 means "send file descriptor 2 to where file descriptor 1 is going." Note that the order matters: 2>&1 must come after > output.txt, because the shell applies redirections from left to right.

Split output and errors:

$ node script.ts > output.txt 2> errors.txt

Normal output goes to output.txt, errors go to errors.txt.

Redirection Reference Table

Operator | Meaning                         | Example
---------|---------------------------------|----------------------------
>        | Overwrite file with stdout      | node app.ts > log.txt
>>       | Append stdout to file           | node app.ts >> log.txt
2>       | Redirect stderr only            | node app.ts 2> errors.txt
2>&1     | Redirect stderr to stdout       | node app.ts > all.txt 2>&1
&>       | Shortcut for > file 2>&1 (Bash) | node app.ts &> all.txt
|        | Pipe stdout to another command  | node app.ts | grep "error"
<        | Read stdin from file            | node app.ts < input.txt

Redirecting Input: Reading from Files

You can also redirect a file into a program's stdin:

$ node process-data.ts < input.txt

This sends the contents of input.txt into your program's stdin. Your program can read it using process.stdin:

// process-data.ts
// Purpose: Process data from stdin (could be keyboard or file)

process.stdin.on("data", (chunk: Buffer) => {
  const text = chunk.toString();
  // Split on whitespace, dropping empty strings (e.g. from the
  // trailing newline) so the count matches what you'd expect
  const words = text.split(/\s+/).filter((word) => word.length > 0);
  console.log(`Processed ${words.length} words`);
});

Run it with a file:

$ node process-data.ts < input.txt
Processed 42 words

Or type directly (try it!):

$ node process-data.ts
hello world test
Processed 3 words

Building Data Processing Pipelines

Now let's see the real power of streams: chaining programs together to process data step by step.

Example: Multi-Stage Processing

Imagine you have a log file and want to:

  1. Extract lines containing "ERROR"
  2. Count how many errors occurred
  3. Save the count to a file

Using pipes:

$ cat app.log | grep "ERROR" | wc -l > error-count.txt

Let's break this down:

  • cat app.log reads the file and sends contents to stdout
  • | pipes that stdout into grep's stdin
  • grep "ERROR" filters lines, sends matches to stdout
  • | pipes filtered lines into wc's stdin
  • wc -l counts lines, sends count to stdout
  • > redirects that count into error-count.txt

Each program does one thing well, and they work together through standard streams.
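Your own Node.js scripts can slot into pipelines like this, too. Here's a hypothetical stand-in for the grep "ERROR" | wc -l stages: it reads log lines from stdin and writes only the count to stdout (the filename error-count.ts is just for illustration):

// error-count.ts
// Purpose: Count lines containing "ERROR" from stdin
// Usage:   cat app.log | node error-count.ts > error-count.txt

import readline from "readline";

let errorCount = 0;
const rl = readline.createInterface({ input: process.stdin });

rl.on("line", (line) => {
  if (line.includes("ERROR")) errorCount++;
});

rl.on("close", () => {
  process.stdout.write(`${errorCount}\n`);
});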

Creating Pipeline-Friendly Node.js Scripts

Want your Node.js scripts to work in pipelines? Follow these principles:

// pipeline-friendly.ts
// Purpose: Process data in a pipeline-friendly way
// Principle: Read stdin, write stdout, log to stderr

// Log progress to stderr (not stdout, so it doesn't pollute data)
process.stderr.write("Starting data processor...\n");

let lineCount = 0;

process.stdin.on("data", (chunk: Buffer) => {
  const lines = chunk.toString().split("\n");

  lines.forEach((line) => {
    if (line.trim()) {
      lineCount++;
      // Process and output to stdout (for next program in pipeline)
      process.stdout.write(line.toUpperCase() + "\n");
    }
  });
});

process.stdin.on("end", () => {
  // Final statistics go to stderr
  process.stderr.write(`Processed ${lineCount} lines\n`);
});

Now you can use it in a pipeline:

$ cat input.txt | node pipeline-friendly.ts | node another-processor.ts

Notice how:

  • Data flows through stdout (for the next program)
  • Progress messages go to stderr (visible to you, not the next program)
  • The script doesn't care where input comes from (keyboard, file, or another program)

Common Misconceptions

❌ Misconception: stdout and stderr are the same thing

Reality: They're separate channels with different purposes.

Why this matters: If you write errors to stdout, they'll mix with your program's normal output. When someone redirects your output to a file, errors go there too—making files contain mixed results and error messages.

Example:

// ❌ Wrong: Mixing errors with normal output
console.log("Processing file...");
console.log("Error: File not found"); // Goes to stdout!
console.log("Done");

// ✅ Correct: Separate error channel
console.log("Processing file...");
process.stderr.write("Error: File not found\n"); // Goes to stderr
console.log("Done");

When redirected:

$ node script.ts > output.txt
Error: File not found # Still visible! (stderr not redirected)

The error appears on screen while normal output goes to the file.

❌ Misconception: You can directly access ArrayBuffer-like data in streams

Reality: Stream data arrives as Buffers in Node.js, which you need to convert before treating it as text.

Why this matters: Trying to access stream data like an array will fail.

Example:

// ❌ Wrong: Treating chunk as a string
process.stdin.on("data", (chunk) => {
  console.log(chunk[0]); // Prints a number (byte value), not a character!
});

// ✅ Correct: Convert Buffer to string first
process.stdin.on("data", (chunk: Buffer) => {
  const text = chunk.toString(); // Now it's a string
  console.log(text[0]); // Prints first character
});

❌ Misconception: process.stdin works like readline or prompt libraries

Reality: process.stdin gives you raw chunks of data, not line-by-line input.

Why this matters: For interactive CLI apps with complex input, you'll want to use libraries like readline or inquirer built on top of stdin.

Example:

// ❌ Expecting line-by-line behavior
process.stdin.on("data", (chunk) => {
  // chunk might be partial data, not always complete lines!
  console.log("Line:", chunk.toString());
});

// ✅ Use readline for line-by-line reading
import readline from "readline";

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

rl.on("line", (line) => {
  // Now you get complete lines
  console.log("Complete line:", line);
});

Troubleshooting Common Issues

Problem: Data appears but program doesn't exit

Symptoms: Your program processes input but hangs after finishing.

Common Causes:

  1. stdin is still open (70% of cases)
  2. Event listeners keep the process alive (20% of cases)
  3. Unclosed file handles or timers (10% of cases)

Diagnostic Steps:

// Step 1: Check if stdin is keeping the process alive
process.stdin.on("data", (chunk) => {
  console.log("Data:", chunk.toString());

  // If this is your last operation, explicitly close stdin
  process.stdin.pause();
});

// Step 2: Set a timeout to see if anything is keeping the process alive
setTimeout(() => {
  console.log("Still running...");
}, 5000);

// Step 3: Explicitly exit when done
process.stdin.on("end", () => {
  console.log("Stream ended");
  process.exit(0); // Force exit
});

Solution: Either close/pause stdin when done, or explicitly call process.exit().

Prevention: Plan your program's exit strategy. Decide when input is "done" and close resources explicitly.

Problem: Redirection doesn't work as expected

Symptoms: You run node script.ts > output.txt, but the file is empty or contains unexpected content.

Common Causes:

  1. Writing to stderr instead of stdout (80% of cases)
  2. Buffer not flushed before exit (15% of cases)
  3. Asynchronous operations not completing (5% of cases)

Diagnostic Steps:

// Step 1: Verify which stream you're using
process.stdout.write("This goes to stdout\n");
process.stderr.write("This goes to stderr\n");

// Run with: node script.ts > output.txt
// Check: Only the stdout message appears in the file

// Step 2: Ensure writes complete
process.stdout.write("Important data");
// Without \n, might not flush immediately

// Step 3: Wait for async operations
setTimeout(() => {
  process.stdout.write("Delayed output\n");
  process.exit(0);
}, 1000);

Solution: Always use stdout for data output, stderr for messages. Add \n to flush buffers.

Prevention: Use console.log() (which adds \n automatically) or explicitly add newlines to process.stdout.write().

Problem: "stdin is not a TTY" error

Symptoms: Program crashes with TTY-related errors when reading input.

Common Causes: Input is being piped from a file or another program, but your code assumes interactive terminal input.

Solution:

// Check if stdin is interactive (a TTY) before using TTY features
if (process.stdin.isTTY) {
  // Interactive mode: can use features like readline
  console.log("Running in interactive mode");
} else {
  // Piped/redirected mode: simple data reading
  console.log("Reading from pipe or file");
}

process.stdin.on("data", (chunk) => {
  // This works in both modes
  console.log(chunk.toString());
});

Prevention: Build programs that work both interactively and in pipelines. Test both scenarios.

Check Your Understanding

Quick Quiz

  1. Which stream should you use for error messages?

    Show Answer

    Use process.stderr for errors. This keeps errors separate from normal output, allowing users to redirect output to files while still seeing errors on screen.

    // ✅ Correct
    process.stderr.write("Error: Configuration invalid\n");

    // ❌ Wrong
    console.log("Error: Configuration invalid"); // Goes to stdout!
  2. What's the difference between > and >>?

    Show Answer
    • > overwrites the file completely
    • >> appends to the existing file
    $ echo "first" > file.txt   # file.txt: "first"
    $ echo "second" > file.txt  # file.txt: "second" (overwrote)
    $ echo "third" >> file.txt  # file.txt: "second\nthird" (appended)
  3. What's wrong with this code?

    process.stdin.on("data", (chunk) => {
      console.log("You typed:", chunk);
    });
    Show Answer

    The chunk is a Buffer, not a string. Printing it directly shows something like <Buffer 68 65 6c 6c 6f> instead of the actual text.

    // ✅ Correct: Convert Buffer to string
    process.stdin.on("data", (chunk: Buffer) => {
      console.log("You typed:", chunk.toString());
    });

Hands-On Exercise

Challenge: Create a program that counts words from stdin and writes the count to stdout.

Starter Code:

// word-counter.ts
// TODO: Implement word counting

process.stdin.on("data", (chunk: Buffer) => {
  // Your code here
});

process.stdin.on("end", () => {
  // Output the final count
});
Show Solution
// word-counter.ts
// Solution: Count words from stdin

let totalWords = 0;

process.stdin.on("data", (chunk: Buffer) => {
  const text = chunk.toString();
  // Split by whitespace and filter empty strings
  const words = text.split(/\s+/).filter((word) => word.length > 0);
  totalWords += words.length;
});

process.stdin.on("end", () => {
  process.stdout.write(`Total words: ${totalWords}\n`);
});

Why this works:

  1. We maintain a counter outside the event handler
  2. Each data chunk is converted to text
  3. Words are split by whitespace (\s+ matches one or more spaces/tabs/newlines)
  4. Empty strings are filtered out
  5. The final count is written to stdout when input ends

Test it:

$ echo "hello world test" | node word-counter.ts
Total words: 3

Summary: Key Takeaways

Let's recap what we've discovered about standard streams:

The Three Streams:

  • stdin (fd 0): Where data flows into your process (keyboard, files, other programs)
  • stdout (fd 1): Where normal output goes (terminal, files, other programs)
  • stderr (fd 2): Where errors and diagnostics go (separate from normal output)

In Node.js:

  • Access streams through process.stdin, process.stdout, process.stderr
  • Read from stdin using process.stdin.on('data', callback)
  • Write to stdout using process.stdout.write() or console.log()
  • Write errors to stderr using process.stderr.write()

Key Principles:

  • Every process gets these three streams automatically
  • Separating stdout and stderr allows flexible redirection
  • Streams can be piped together to build data processing pipelines
  • Programs should work both interactively and in pipelines

Essential Operators:

  • > redirects stdout to file (overwrite)
  • >> redirects stdout to file (append)
  • 2> redirects stderr to file
  • | pipes stdout to another program's stdin
  • < reads file into stdin

What's Next?

Now that you understand standard streams, you're ready to explore:

Advanced Stream Processing:

  • Node.js Streams API (Transform streams, Readable/Writable streams)
  • Backpressure handling for large data
  • Stream error handling and recovery

Related Topics:

  • Buffers in Node.js: Understanding binary data
  • Child processes: Running and communicating with other programs
  • Building CLI tools: Interactive prompts and argument parsing

Practical Applications:

  • Building command-line tools that work in pipelines
  • Processing large files without loading everything into memory
  • Creating data transformation utilities

You now have the foundation to build powerful Node.js applications that communicate effectively with users, files, and other programs. The standard stream model is elegant, universal, and incredibly powerful once you understand how it works.


Standard streams embody the Unix philosophy: simple, universal interfaces that programs use to communicate. Whether you're building CLI tools, processing data, or just displaying output, streams are the fundamental building blocks of process communication.