In the grand narrative of software development, Object-Oriented Programming (OOP) is often cast as the protagonist of reusability. We’re taught that through encapsulation, inheritance, and polymorphism, we can build modular, Lego-like systems that are easy to extend and maintain. And for many decades, this story has held true. The class, the object, the interface—these are the bedrock concepts upon which vast digital empires have been built.
But this narrative, compelling as it is, is incomplete. It risks overshadowing other, equally powerful—and in some contexts, superior—paradigms for achieving the holy grail of software engineering: writing code once and using it everywhere. The idea that reusability is exclusively, or even primarily, the domain of OOP is a misconception. From the stark simplicity of the 1970s Unix command line to the mind-bending elegance of modern functional and generic programming, brilliant minds have been solving the reusability puzzle in ways that have nothing to do with `new` keywords or class hierarchies.
This article is an exploration of that hidden world. We will journey through five distinct yet interconnected philosophies that have achieved tremendous reusability without relying on traditional object-oriented principles. We’ll see how composing tiny programs, treating functions as data, writing code that is generic over types, building robust libraries, and even programming the programming language itself can lead to systems that are profoundly modular, maintainable, and reusable. Prepare to look beyond the object and discover the diverse and powerful landscape of code reuse that has been shaping our digital world all along.
1. The Unix Philosophy: Reusability Through Composition of Processes
Long before the concepts of microservices and serverless functions entered the popular lexicon, there was the Unix command line. Born in the minimalist, resource-constrained environment of Bell Labs in the early 1970s, the Unix philosophy represents one of the most successful and enduring examples of non-OOP reusability in the history of computing. Its power doesn’t come from complex abstractions or intricate type systems, but from a radical commitment to simplicity and composition.
The philosophy, as famously summarized by Doug McIlroy, one of its originators, can be distilled into a few core tenets:
- Write programs that do one thing and do it well. Each program should be a master of a single task, not a jack-of-all-trades. The `grep` command finds text. The `sort` command sorts lines. The `wc` command counts words. None of them try to do the others’ jobs.
- Write programs that work together. The output of any program should be usable as the input to another, as yet unknown, program.
- Write programs to handle text streams, because that is a universal interface. By standardizing on simple, line-oriented text as the medium of communication, programs don’t need to know anything about each other’s internal logic. Text is the universal language.
This set of principles created an ecosystem of small, independent, and incredibly reusable tools. The true genius lies in the mechanism that connects them: the pipe (`|`). The pipe is an operator that takes the standard output of the command on its left and “pipes” it directly into the standard input of the command on its right. This allows for the creation of complex workflows by chaining together simple, single-purpose tools.
Let’s dissect a classic example to see this reusability in action. Imagine you have a large log file, `server.log`, and you want to find the top 10 most frequent IP addresses that have accessed your server.
Without the Unix philosophy, you might write a single, monolithic script in a language like Python or Perl. This script would need to:
1. Open and read the `server.log` file.
2. Use a regular expression to extract IP addresses from each line.
3. Store these IP addresses in a hash map or dictionary to count their occurrences.
4. Sort the dictionary by the counts in descending order.
5. Finally, print the top 10 results (a sketch of such a script appears below).

This script would be a self-contained unit. It would be reusable only in its entirety. If you later wanted to find the most common user agents instead of IP addresses, you’d have to modify the script’s internal logic, specifically the regular expression part.
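To make that contrast concrete, here is a minimal sketch of what such a monolithic script might look like in Python. The function name, the regular expression, and the assumption that IP addresses can appear anywhere on a line are illustrative choices, not a prescribed implementation:

```python
import re
from collections import defaultdict

# A hypothetical monolithic analyzer: reading, extracting, counting,
# sorting, and printing are all welded together in one function.
def top_ips(log_path, n=10):
    ip_pattern = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
    counts = defaultdict(int)
    with open(log_path) as log_file:                 # 1. open and read the file
        for line in log_file:
            for ip in ip_pattern.findall(line):      # 2. extract IP addresses
                counts[ip] += 1                      # 3. count occurrences
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)  # 4. sort by count
    for ip, count in ranked[:n]:                     # 5. print the top N
        print(f"{count:6d} {ip}")

top_ips("server.log")
```

Changing the analysis from IP addresses to user agents means editing the body of this one function, because extraction, counting, and presentation are all entangled inside it.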
Now, let’s solve the same problem using the Unix philosophy and a chain of reusable command-line tools:
grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b" server.log | sort | uniq -c | sort -nr | head -n 10
This might look cryptic at first, but it’s a beautiful demonstration of modular reusability. Let’s break it down step-by-step, imagining the text stream flowing from left to right through the pipes:
1. `grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b" server.log`:
   - What it does: The `grep` command is a reusable tool for finding text that matches a pattern. The `-o` flag tells it to output only the matching part of the lines, and `-E` enables extended regular expressions.
   - Its sole job: To read `server.log` and spit out a stream of text containing only the IP addresses, each on a new line. It knows nothing about sorting, counting, or what will happen to its output.
   - Output Stream:
     192.168.1.1
     10.0.0.5
     192.168.1.1
     172.16.0.88
     ...
2. `... | sort`:
   - What it does: The `sort` command is a reusable tool for sorting lines of text alphabetically and numerically. It takes the stream of IP addresses from `grep` as its input.
   - Its sole job: To arrange the incoming lines in order, which is a necessary prerequisite for the next step. It doesn’t know where the IPs came from or why they need to be sorted.
   - Output Stream:
     10.0.0.5
     172.16.0.88
     192.168.1.1
     192.168.1.1
     ...
3. `... | uniq -c`:
   - What it does: The `uniq` command is a reusable tool that, by default, filters out adjacent duplicate lines. The `-c` flag modifies its behavior to count adjacent duplicates and prefix each line with its count.
   - Its sole job: To count consecutive identical lines. This is why the `sort` step was crucial. `uniq` is simple; it doesn’t keep a global memory of all lines seen, only the previous one.
   - Output Stream:
     1 10.0.0.5
     1 172.16.0.88
     2 192.168.1.1
     ...
4. `... | sort -nr`:
   - What it does: We use our reusable `sort` tool again! This time, with flags. `-n` tells it to sort numerically (so “10” is treated as greater than “2”), and `-r` tells it to sort in reverse (descending) order.
   - Its sole job: To take the counted lines and order them from most frequent to least frequent. It’s the same tool as before, reused in a different context with different options.
   - Output Stream:
     543 8.8.8.8
     321 1.1.1.1
     ...
     2 192.168.1.1
5. `... | head -n 10`:
   - What it does: The `head` command is a reusable tool for showing the first N lines of its input. The `-n 10` flag specifies that we only want the top 10.
   - Its sole job: To truncate the stream after the tenth line.
   - Final Output: The top 10 most frequent IP addresses and their counts.
Each component in this pipeline is completely decoupled. `grep` doesn’t need to be updated if a better sorting algorithm is implemented in `sort`. `uniq` can be used in countless other pipelines that have nothing to do with IP addresses. This is reusability at the process level. The modern concept of microservices, where small, independent services communicate over a universal protocol like HTTP/JSON, is the direct philosophical descendant of this 50-year-old idea.
2. Functional Programming: Reusability Through Higher-Order Functions
Functional Programming (FP) offers a radically different, yet equally potent, model for reusability. Instead of encapsulating data and behavior together in objects, FP emphasizes the separation of data from the functions that operate on it. Its reusability stems from treating functions not just as procedures to be called, but as first-class citizens. This means functions can be stored in variables, passed as arguments to other functions, and returned as the result of other functions.
The key mechanism for reusability in this paradigm is the Higher-Order Function (HOF). A HOF is simply a function that takes another function as an argument or returns a function. This allows us to abstract and reuse patterns of computation rather than just concrete values or objects.
Let’s explore this with a practical example using JavaScript, a language that has beautifully integrated functional concepts. Imagine you have a list of products, and you need to perform several different operations on it:
- Get a list of all the product names.
- Find all products that are on sale.
- Calculate the total value of all products in stock.
A traditional, imperative (non-functional) approach might look like this:
const products = [
{ name: 'Laptop', price: 1200, onSale: false, stock: 15 },
{ name: 'Mouse', price: 25, onSale: true, stock: 120 },
{ name: 'Keyboard', price: 75, onSale: true, stock: 65 },
{ name: 'Monitor', price: 300, onSale: false, stock: 30 }
];
// Operation 1: Get product names
const productNames = [];
for (let i = 0; i < products.length; i++) {
productNames.push(products[i].name);
}
// Operation 2: Find products on sale
const saleProducts = [];
for (let i = 0; i < products.length; i++) {
if (products[i].onSale) {
saleProducts.push(products[i]);
}
}
// Operation 3: Calculate total stock value
let totalValue = 0;
for (let i = 0; i < products.length; i++) {
totalValue += products[i].price * products[i].stock;
}
Notice the repetition. In each case, we are writing a `for` loop. The core structure—iterating over the `products` array—is repeated three times. The only thing that changes is the action we perform inside the loop. This is a prime candidate for abstraction.
Functional programming provides highly reusable HOFs to eliminate this boilerplate. The three most common are `map`, `filter`, and `reduce`.
- `map`: Creates a new array by applying a given function to every element of the original array. It abstracts the pattern of “transforming each element.”
- `filter`: Creates a new array containing only the elements that pass a test (a function that returns `true` or `false`). It abstracts the pattern of “selecting a subset of elements.”
- `reduce`: Executes a function on each element of the array, resulting in a single output value. It abstracts the pattern of “accumulating a result.”
Let’s refactor our code using these reusable HOFs:
// Operation 1: Get product names (using map)
const getName = (product) => product.name;
const productNamesFP = products.map(getName);
// Operation 2: Find products on sale (using filter)
const isOnSale = (product) => product.onSale;
const saleProductsFP = products.filter(isOnSale);
// Operation 3: Calculate total stock value (using reduce)
const accumulateValue = (accumulator, product) => accumulator + (product.price * product.stock);
const totalValueFP = products.reduce(accumulateValue, 0);
This is profoundly more reusable. The logic for iteration (the `for` loops) is now encapsulated within the `map`, `filter`, and `reduce` functions. These functions are part of the language’s standard library and can be used on any array, not just our array of products.
Our application-specific logic is now contained in small, pure, and highly reusable functions like `getName` and `isOnSale`. We separated the “what” (our business logic, e.g., `getName`) from the “how” (the iteration, handled by `map`). If we need to get the prices of all products, we don’t need a new loop; we simply write a new small function and pass it to our reusable `map` function:
const getPrice = (product) => product.price;
const productPrices = products.map(getPrice);
This is reusability of behavior. The HOFs (`map`, `filter`, `reduce`) are generic, reusable algorithms. The small functions we pass to them (`getName`, `isOnSale`) are specific, reusable pieces of business logic. By combining them, we build complex operations from small, understandable, and testable parts.
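The same three abstractions exist in most languages that treat functions as values. As a point of comparison only (a sketch, not part of the original example), here is the equivalent in Python, where `map` and `filter` are built in and `reduce` lives in the `functools` module:

```python
from functools import reduce

products = [
    {"name": "Laptop", "price": 1200, "onSale": False, "stock": 15},
    {"name": "Mouse", "price": 25, "onSale": True, "stock": 120},
    {"name": "Keyboard", "price": 75, "onSale": True, "stock": 65},
    {"name": "Monitor", "price": 300, "onSale": False, "stock": 30},
]

# The iteration lives inside map/filter/reduce; only the small
# business-logic functions change from one operation to the next.
product_names = list(map(lambda p: p["name"], products))
sale_products = list(filter(lambda p: p["onSale"], products))
total_value = reduce(lambda acc, p: acc + p["price"] * p["stock"], products, 0)

print(product_names)       # ['Laptop', 'Mouse', 'Keyboard', 'Monitor']
print(len(sale_products))  # 2
print(total_value)         # 34875
```

The paradigm, not the language, carries the reusability: the generic algorithms stay the same, and only the small, domain-specific functions are swapped in.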
3. Generic Programming: Reusability Through Parametric Polymorphism
Generic Programming is a paradigm that allows us to write functions and data structures where some of the types are left unspecified, to be filled in later. This is not the same as dynamic typing; it’s a compile-time mechanism that produces code that is both highly reusable and type-safe. It’s often called parametric polymorphism, in contrast to the subtype polymorphism (inheritance) found in OOP.
Instead of writing a function that works for a specific `Dog` class and can be reused for a `Poodle` subclass, you write a function that works for any type `T`, as long as `T` satisfies a specific set of requirements or behaviors, known as a contract.
Modern languages like Rust, Swift, and Haskell have made this a cornerstone of their design, but the concept has roots in languages like C++ (with its template system). Let’s use Rust to explore this, as its “trait” system provides a very clear and explicit way of defining these behavioral contracts.
Imagine you need to write a function that finds the largest item in a slice of items. Without generics, you would have to write a separate function for each type:
// A function to find the largest i32 (32-bit integer)
fn largest_i32(list: &[i32]) -> &i32 {
let mut largest = &list[0];
for item in list {
if item > largest {
largest = item;
}
}
largest
}
// A function to find the largest char
fn largest_char(list: &[char]) -> &char {
let mut largest = &list[0];
for item in list {
if item > largest {
largest = item;
}
}
largest
}
The code is identical. The only difference is the type (`i32` vs. `char`). This is a massive violation of the Don’t Repeat Yourself (DRY) principle.
Generic programming solves this beautifully. We can write a single, generic function that abstracts over the type.
use std::cmp::PartialOrd;
// A generic function to find the largest item of any type T
fn largest<T: PartialOrd>(list: &[T]) -> &T {
let mut largest = &list[0];
for item in list {
// This line will only compile if type T can be compared with '>'
if item > largest {
largest = item;
}
}
largest
}
Let’s break down the magic in the function signature `fn largest<T: PartialOrd>(list: &[T]) -> &T`:
- `<T>`: This declares a generic type parameter named `T`. It’s a placeholder for some concrete type.
- `list: &[T]`: This means `list` is a slice of whatever type `T` turns out to be.
- `-> &T`: This means the function will return a reference to a value of type `T`.
- `: PartialOrd`: This is the crucial part. It’s the trait bound, or the contract. It says, “You can use any type `T` for this function, as long as `T` implements the `PartialOrd` trait.” The `PartialOrd` trait is what provides the ability to compare values using operators like `>` and `<`.
Now, we have one function that is completely reusable for any type that can be ordered.
fn main() {
let numbers = vec![34, 50, 25, 100, 65];
let result = largest(&numbers); // Works! T is i32, which implements PartialOrd.
println!("The largest number is {}", result);
let chars = vec!['y', 'm', 'a', 'q'];
let result = largest(&chars); // Works! T is char, which implements PartialOrd.
println!("The largest char is {}", result);
}
If we try to use it with a type that doesn’t make sense to compare, the compiler will protect us:
struct Point { x: i32, y: i32 }
let points = vec![Point { x: 1, y: 1 }, Point { x: 2, y: 2 }];
let result = largest(&points); // COMPILE ERROR!
// The error message would be: `Point` does not implement `std::cmp::PartialOrd`
The compiler correctly tells us that it doesn’t know how to compare two `Point` structs. To make it work, we would explicitly define how `Point`s should be ordered by implementing the `PartialOrd` trait for our `Point` struct.
This approach gives us the best of all worlds:
- Reusability: We write the `largest` logic once, and it works for an unlimited number of types.
- Type Safety: The compiler guarantees at compile time that the function will only be called with types that meet the contract, so there are no runtime type errors.
- Performance: Through a process called monomorphization, the compiler generates specialized, optimized versions of the generic function for each concrete type used at compile time. So, behind the scenes, it produces something like `largest_i32` and `largest_char`, giving us zero-cost abstractions.
This is a profoundly powerful way to build reusable and robust libraries and APIs.
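The trait-bound idea is not unique to Rust. As a hedged sketch (my own names, and checked by an external tool such as mypy rather than enforced by a compiler), the same `largest` contract can be expressed with Python's type hints, where a `Protocol` plays the role of the trait:

```python
from typing import Protocol, TypeVar

class SupportsLessThan(Protocol):
    """The contract: any type used here must support the < operator."""
    def __lt__(self, other) -> bool: ...

T = TypeVar("T", bound=SupportsLessThan)

def largest(items: list[T]) -> T:
    # One implementation, reusable for any type that can be ordered.
    result = items[0]
    for item in items[1:]:
        if result < item:
            result = item
    return result

print(largest([34, 50, 25, 100, 65]))  # 100
print(largest(["y", "m", "a", "q"]))   # y
```

Python will not refuse to run a call that violates the contract the way Rust does; the checker only warns. But the reusable shape is the same: one algorithm, parameterized over a behavioral contract rather than a class hierarchy.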
4. Procedural Programming: Reusability Through Libraries
This might seem almost too simple to include, but the humble library is arguably the most prolific and successful mechanism for code reuse in history, and its roots lie firmly in the non-OOP world of procedural programming. Languages like C, Fortran, and Pascal powered the digital revolution by packaging reusable code into libraries.
In procedural programming, the fundamental unit of organization is the function (or procedure). Reusability is achieved by grouping related functions together into a compilation unit, exposing a public interface through a header file, and distributing the compiled implementation as a shared (`.so`, `.dll`) or static (`.a`, `.lib`) library.
Let’s consider the C language. C itself is remarkably small. What makes it powerful is the vast ecosystem of libraries built around it, starting with the C Standard Library. Think about the `printf` function. No C programmer ever writes the complex logic for parsing format strings and converting binary data into characters for the console; they simply `#include <stdio.h>` and call `printf`. This is foundational reusability.
But it goes much deeper. Let’s take a more complex example: `libcurl`. `libcurl` is a free, open-source client-side URL transfer library. It supports protocols like HTTP, HTTPS, FTP, and dozens more. When a developer needs to make an HTTP request in their C or C++ application, they don’t start writing socket code, parsing HTTP headers, or handling TLS handshakes. They link against `libcurl`.
The mechanism works like this:
1. The API Contract (Header File): `libcurl` provides a header file, typically `curl/curl.h`. This file contains the function prototypes, type definitions, and constants that make up the library’s public API. It’s the contract that tells the consumer how to use the library. It might contain function declarations like:
   `CURL *curl_easy_init(void);`
   `CURLcode curl_easy_setopt(CURL *curl, CURLoption option, ...);`
   `CURLcode curl_easy_perform(CURL *curl);`
   `void curl_easy_cleanup(CURL *curl);`
   This header file makes no mention of how these functions are implemented. It’s a pure interface.
2. The Implementation (Compiled Library): The `libcurl` developers have written hundreds of thousands of lines of C code to implement all the complex logic for networking. This code is compiled into a binary file (e.g., `libcurl.so` on Linux or `libcurl.dll` on Windows). This binary contains the machine code that actually does the work.
3. Usage (Linking): The consumer of the library writes their own application. They include the `curl.h` header file so the compiler knows that functions like `curl_easy_init` exist. When they compile their code, they tell the linker to link their application with the `libcurl.so` library. The linker’s job is to resolve the function calls in the application code and connect them to the actual implementations inside the compiled library binary.
This model provides a powerful form of binary reusability and encapsulation without any need for objects. The internal state of `libcurl` is managed via an opaque pointer (`CURL *`), a common C pattern for hiding implementation details. The user can manipulate this state only through the public functions provided in the header. They cannot, and do not need to, know how `libcurl` works internally.
This approach has several profound benefits:
- Language Interoperability: Because the library is a compiled binary with a C-style function interface, it can be called from almost any other programming language. Python, Ruby, Node.js, C#, and Rust can all use a Foreign Function Interface (FFI) to call functions in a C library like `libcurl`. This makes C libraries a lingua franca for reusable components (an FFI sketch follows below).
- Stable APIs: A library can keep its public interface stable—the function signatures in its header files—while the internal implementation can be completely overhauled. This is a critical feature for long-term software maintenance. The developers of the library are free to fix bugs, optimize performance, or even switch out entire underlying dependencies. As long as they don’t change the public-facing function signatures, consumer applications don’t need to be rewritten. They simply need to be relinked against the new version of the library to gain the benefits of the internal improvements.
For instance, the team behind a popular image processing library could replace their slow, custom-written JPEG decoding algorithm with the much faster, industry-standard `libjpeg-turbo`. From the perspective of an application developer using the library, nothing has changed. Their call to `load_image_from_file("photo.jpg")` looks exactly the same. But when they link their application to the new version of the library, their program suddenly runs faster. This powerful decoupling between interface and implementation is a form of encapsulation, achieved not through `private` keywords and classes, but through the hard boundary of a compiled binary.
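To make the interoperability point from the list above concrete, here is a minimal sketch of calling straight into a compiled C library from Python through its built-in FFI module, `ctypes`. It uses the C standard library's `strlen` rather than `libcurl` to stay dependency-free, and the library lookup is an assumption about a Unix-like host:

```python
import ctypes
import ctypes.util

# Locate and load the C standard library (the name varies by platform).
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Describe the C signature we are calling: size_t strlen(const char *s);
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

# The call goes directly into the compiled, reusable C implementation.
print(libc.strlen(b"hello, world"))  # 12
```

Higher-level bindings to libraries such as `libcurl` follow the same basic idea: one compiled, reusable implementation behind a C interface, consumed from whatever language the application happens to be written in.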
This procedural library model, while old, is far from obsolete. It forms the bedrock of nearly every operating system. It’s how device drivers expose functionality, how graphics APIs like OpenGL are specified, and how countless high-performance scientific computing and systems programming tasks are accomplished. It is a battle-tested, language-agnostic, and profoundly effective strategy for code reuse.
5. Metaprogramming: Reusability by Programming the Language Itself
Our final destination on this journey is perhaps the most mind-bending and abstract, yet it offers the ultimate form of reusability: metaprogramming. If the previous paradigms were about reusing components within a language, metaprogramming is about reusing patterns to extend the language itself. It is, in short, code that writes code.
This isn’t about simple text substitution, like the C preprocessor’s `#define` directive, which is notoriously error-prone. True metaprogramming, found in languages like Lisp, Elixir, Rust, and Nim, operates on the structure of the code itself, typically on its Abstract Syntax Tree (AST). The mechanism for this is the macro.
A macro is a special kind of function that runs at compile time. Unlike a regular function, which operates on data at runtime, a macro receives fragments of code as its input and produces new fragments of code as its output. This new code is then seamlessly inserted into the program before the final compilation step. This allows a programmer to eliminate boilerplate and create new, expressive syntactic constructs that are perfectly tailored to their problem domain. You are essentially designing and reusing new pieces of your programming language.
Let’s explore this with a classic and practical example: safe resource management. In many programming languages, when you work with an external resource like a file or a network connection, you must follow a specific pattern to ensure correctness:
1. Open the resource.
2. Perform your operations within a `try` block.
3. If an error occurs, catch it and handle it.
4. Crucially, in a `finally` block, ensure the resource is closed, regardless of whether an error occurred.

Writing this out manually every time you need to read a file is tedious and, more importantly, easy to get wrong. You might forget the `finally` block, leading to resource leaks.
Here’s how that boilerplate might look in a hypothetical language:
// Reading file A
let fileA = open_file("/path/to/a.txt");
try {
// do work with fileA...
print(read_line(fileA));
} catch (error) {
log_error(error);
} finally {
close_file(fileA);
}
// Reading file B
let fileB = open_file("/path/to/b.txt");
try {
// do different work with fileB...
process_data(read_all(fileB));
} catch (error) {
log_error(error);
} finally {
close_file(fileB);
}
The structure is identical in both cases. The only parts that change are the filename and the block of code inside the `try` statement. This recurring code pattern is a perfect candidate for abstraction via a macro.
Let’s imagine we’re in a Lisp-like language that supports powerful macros. We could write a macro called `with-open-file` to encapsulate this entire pattern.
(defmacro with-open-file ((var file-path) &body body)
; This is the macro definition. `var` and `file-path` are inputs.
; `body` captures all the code that the user provides inside the macro call.
; The backquote ` means we're creating a template for code.
`(let ((,var (open_file ,file-path)))
(try
,@body ; The ,@ "splices" the user's code block right here.
(catch (error)
(log_error error))
(finally
(close_file ,var)))))
This might look intimidating, but the concept is straightforward. We’ve defined a template. When the compiler sees `with-open-file`, it will execute this macro. The macro takes the pieces of code it was given (the variable name, the file path, and the body of code) and programmatically arranges them into the full `try...catch...finally` structure.
Now, a programmer can reuse this safe pattern with beautiful simplicity:
(with-open-file (fileA "/path/to/a.txt")
; do work with fileA...
(print (read_line fileA)))
(with-open-file (fileB "/path/to/b.txt")
; do different work with fileB...
(process_data (read_all fileB)))
This code is not only shorter and cleaner, but it’s also fundamentally safer. The programmer cannot forget to close the file because the `close_file` logic is automatically generated by the macro every single time. We haven’t just reused a function; we’ve created a new, reusable, and safe control structure in our language.
This technique is used extensively in modern non-OOP ecosystems. The Phoenix web framework for Elixir, for example, uses macros to create Domain-Specific Languages (DSLs) for routing, database definitions, and HTML templating. When you write a router in Phoenix, you use clean keywords like `get`, `post`, and `pipe_through`. These look like built-in parts of the language, but they are actually macros that expand at compile time into highly optimized, complex code for handling web requests. This allows developers to express their intent clearly and concisely, while the reusable macros handle the messy implementation details.
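Macros are the most direct form of this idea, but "code that generates code to eliminate boilerplate" appears in other ecosystems too. As a loose analogy only (not a macro system, and not equivalent to the Lisp example above), Python's standard `@dataclass` decorator inspects a class at definition time and generates methods the programmer would otherwise write by hand:

```python
from dataclasses import dataclass

# @dataclass generates __init__, __repr__, and __eq__ for us
# when the class is defined, so the boilerplate is never hand-written.
@dataclass
class LogEntry:
    ip: str
    status: int

entry = LogEntry(ip="192.168.1.1", status=200)
print(entry)                                  # LogEntry(ip='192.168.1.1', status=200)
print(entry == LogEntry("192.168.1.1", 200))  # True
```

The guarantee is weaker than a compile-time macro's, but the reuse is of the same kind: a pattern of code generation, written once and applied everywhere.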
Metaprogramming is the pinnacle of abstraction. It allows us to identify and eliminate systemic boilerplate, enforce complex invariants at compile time, and build expressive DSLs that make our codebases easier to read, write, and reason about. It is reusability not of components, but of patterns of code generation.
Conclusion: A World Beyond Objects
Our exploration has taken us from the gritty, process-oriented world of the Unix shell to the abstract, compile-time transformations of metaprogramming. Along the way, we’ve seen how functional programming reuses behavioral patterns with higher-order functions, how generic programming reuses algorithms in a type-safe way, and how procedural programming has built the foundation of modern software with linkable libraries.
What does this all mean? It means that reusability is a universal principle of good software design, not a feature exclusive to a single paradigm. Object-Oriented Programming provides a powerful and well-understood set of tools—classes, inheritance, interfaces—for achieving this goal, and its success is undeniable. But it is one set of tools among many.
The truly effective software architect is not a zealot for a single paradigm but a polyglot who understands the strengths and weaknesses of multiple approaches. They recognize that:
- For gluing together data-processing scripts and system utilities, the Unix philosophy of small, composable tools is often unmatched in its power and simplicity.
- For data transformation pipelines, user interface event handling, or any situation involving a series of computational steps, the functional approach with its reusable higher-order functions leads to cleaner, more predictable code.
- For writing fundamental data structures, algorithms, or any component that needs to work with a variety of data types without sacrificing performance or safety, generic programming is the indispensable tool.
- For creating stable, language-agnostic, high-performance components that form the foundation of an ecosystem, the procedural library model remains as relevant today as it was fifty years ago.
- And for eliminating deep, systemic boilerplate and creating expressive, domain-specific languages, metaprogramming offers an unparalleled level of abstraction.
The goal is not to abandon OOP but to enrich our perspective. By understanding and embracing these diverse and powerful non-OOP paradigms for reusability, we expand our problem-solving toolkit. We learn to see patterns of reuse not just in the relationships between objects, but in the composition of processes, the abstraction of behavior, the parameterization of types, and the very structure of our code. We become more versatile, more creative, and ultimately, better engineers, capable of choosing the right tool for the job and building software that is more robust, maintainable, and truly elegant.