Applications, middleware, operating systems (OS), device drivers, and other software layers work together to drive hardware. Today these layers are getting deeper, so it is harder to see how software actually operates the lowest-level hardware.
To get a software patent, you must describe clearly that “the software’s information processing is concretely realized using hardware resources,” and that “software and hardware resources cooperate to build a particular information-processing device or its operating method for a given purpose” (Japanese patent examination standards).
Below we outline how that “cooperation” is realized.

Stage 1: No software
Although most modern hardware relies on software to function, there are still plenty of machines that can work entirely without it.
A mechanical watch moves its hands by transferring the spring’s torque through gears. The mainspring and gear set (the movement) form a control system.
If you press the “high” switch on a fan, an electronic circuit increases the motor’s rotation speed. The logic is simple: an electronic circuit implements rules like “if switch X is pressed, do Y.”
In these cases, mechanics or electronic circuits alone provide the control. Software (higher layers) was created to let electronic circuits perform more complex and flexible work.
Stage 2: Machine code
A stored-program computer is called a von Neumann computer.
A von Neumann computer repeatedly performs a simple cycle: (1) fetch an instruction from memory, (2) execute it, and (3) select the next instruction. The device that carries out these instructions is the processor, such as a CPU.
If instructions are stored in memory, the processor follows the instruction sheet automatically. Changing the instruction sheet makes the processor do different work.
A processor contains many electronic circuits, and those circuits are built from enormous numbers of transistors. A transistor is an electrically controlled switch. By turning many of these switches on and off in patterns, we give commands to the processor.
For example, one processor might treat the bit pattern “0000” as “read from memory,” “0010” as “add,” and “0011” as “subtract.”
When the processor receives the instruction “0000 0100,” it executes the “0000” (read) operation on the data stored at memory address 0100.
These binary instructions are called machine code (machine language).
If a human gives machine code to a processor, the processor does the intended operations. A program is a sequence of such instructions loaded into memory so the processor can perform complex tasks.
Machine code is the processor’s native language.
The punched cards once used in early computers served as instruction sheets written in machine code. People who worked during the punched-card era said they could almost tell what instructions were on a card just by looking at it.
Machine code is the simplest, most primitive form of “software.”
Stage 3: Assembly language
Machine code is hard for humans to read.
There is also a compatibility problem: processor A’s “read” might be 0000, while processor B’s “read” might be 0101. Because processors differ in circuit design, their machine codes differ.
Assembly language was created to ease these difficulties.
In assembly, the read instruction might be written as LD (for Load). An assembler program translates assembly into machine code. The assembler for processor A turns LD into 0000; the assembler for processor B turns LD into 0101.
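As a rough sketch of this idea, the tiny C program below plays the role of an assembler's lookup table: the same mnemonic LD is translated into 0000 for processor A and into 0101 for processor B, both opcodes being the hypothetical values above.

```c
/* A toy illustration of what an assembler does: translate the mnemonic
 * "LD" into whichever bit pattern the target processor expects. The
 * opcodes are the made-up values from the text, not real ones. */
#include <stdio.h>
#include <string.h>

const char *assemble(const char *mnemonic, char target_processor) {
    if (strcmp(mnemonic, "LD") == 0)
        return (target_processor == 'A') ? "0000" : "0101";
    return "????";  /* unknown mnemonic */
}

int main(void) {
    printf("LD on processor A -> %s\n", assemble("LD", 'A'));  /* 0000 */
    printf("LD on processor B -> %s\n", assemble("LD", 'B'));  /* 0101 */
    return 0;
}
```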
Assembly language is easier to read than machine code. The trade-off is that it requires an extra step — translation by an assembler.
The assembly code is converted into machine code, and the processor operates according to that machine code.
Machine code and assembly are called low-level languages.
Stage 4: High-level languages
High-level languages appeared as a more understandable alternative to assembly language. There are many kinds of high-level languages — C, C++, Java, BASIC, and countless others. It is said that there are at least 200 different high-level languages.
A compiler is a program that converts a high-level language to machine code.
One high-level statement can map to many machine instructions. For example, a single call to the C function "strcmp," which compares two strings, is carried out by a whole sequence of machine instructions supplied by the compiler and the standard library.
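A simplified strcmp-style function makes this visible. The short loop below is compiled into a series of load, compare, branch, and arithmetic instructions, even though the programmer wrote only a few lines; real library versions are far more heavily optimized.

```c
/* A simplified version of what a strcmp-like comparison does internally.
 * The single call my_strcmp(a, b) turns into repeated loads, comparisons,
 * and branches at the machine-code level. */
#include <stdio.h>

int my_strcmp(const char *s1, const char *s2) {
    while (*s1 != '\0' && *s1 == *s2) {  /* load, compare, branch ... repeated */
        s1++;
        s2++;
    }
    return (unsigned char)*s1 - (unsigned char)*s2;
}

int main(void) {
    printf("%d\n", my_strcmp("apple", "apple"));  /* 0: the strings match */
    printf("%d\n", my_strcmp("apple", "apric"));  /* non-zero: they differ */
    return 0;
}
```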
Unlike an assembler, a compiler does not perform a simple word-for-word translation. Instead, it carries out a more advanced task — selecting the appropriate machine instructions to implement the commands written in a high-level language.
Because of this, the performance of the same application can vary depending on the compiler used. The world of compilers is truly deep and fascinating.
Incidentally, reconstructing the original high-level source code from the machine instructions (object code) is known as decompilation, a form of reverse engineering.
Thanks to high-level languages and compilers, programming became much easier. Some high-level languages (like C) allow fine control of hardware; others (like BASIC) are easier to learn but less detailed. Engineers pick languages to match application needs.
Stage 5: Operating system (OS)
Sometimes we want multiple software programs to run at once—for example, editing a document while checking email.
To run multiple software programs in parallel, the system must decide at each moment which program should use the processor.
For example, while you are typing in a word processor, the system may detect that a new email has arrived and must briefly hand the processor to the mailer so it can show a notification.
An OS (Operating System) is the privileged software that manages and schedules multiple programs. The OS itself is written mostly in a high-level language.
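To picture the scheduling part of this job, here is a deliberately over-simplified sketch: the two "programs" are just functions, and a loop gives each one a turn on the processor in round-robin order. A real OS preempts running programs with timer interrupts and saves and restores their full processor state.

```c
/* A highly simplified sketch of scheduling: each "program" (here just a
 * function) gets a short turn on the processor in round-robin order. */
#include <stdio.h>

void word_processor_step(void) { printf("word processor: handle a keystroke\n"); }
void mailer_step(void)         { printf("mailer: check for new email\n");        }

int main(void) {
    void (*tasks[])(void) = { word_processor_step, mailer_step };
    const int task_count = 2;

    for (int tick = 0; tick < 6; tick++) {
        tasks[tick % task_count]();   /* pick which program runs right now */
    }
    return 0;
}
```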
As operating systems grew more capable, programs that ran on them—called “applications”—no longer gave direct instructions to the hardware like the CPU. Instead, they started asking the OS to handle those operations on their behalf.
Once the OS started offering menus of functions—called APIs—for working with hardware, applications began to control the hardware indirectly through them.
Applications still have to follow the OS’s rules, but they can leave the low-level details to the OS.
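For instance, an application that wants to save data does not program the disk controller itself; it calls functions that the OS exposes, reached here through the standard C library. The sketch below only shows the idea, not how any particular OS implements it.

```c
/* The application asks the OS (through the standard C library) to write a
 * file. The library turns this into system calls; the OS then drives the
 * disk hardware. The application never touches the disk controller itself. */
#include <stdio.h>

int main(void) {
    FILE *fp = fopen("note.txt", "w");   /* request: "open a file for writing" */
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }
    fprintf(fp, "Saved through the OS, not by driving the disk directly.\n");
    fclose(fp);                          /* request: "flush and close the file" */
    return 0;
}
```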
The OS made it easier for developers to concentrate on creating better and higher-quality applications.
However, since applications can be built without knowing much about what goes on inside the OS, it has become less clear how software and hardware really cooperate.
As a result, software evolved into two main layers:
the application layer, which handles specific tasks, and the operating system layer, which quietly connects and manages both the applications and the hardware underneath.
Operating systems such as Windows provide a wide range of powerful features that make it easier to develop advanced applications.
The OS can truly be considered the essence of modern software technology.
Software terms
Middleware
Software that acts as a bridge between applications and the operating system is called middleware. As applications have become more complex and technically demanding, developers began to want certain functions to be handled by other software components.
Middleware emerged to meet this need.
In this structure, the application sends instructions to the middleware, the middleware communicates with the OS, and the OS ultimately controls the hardware.
With the introduction of middleware, software evolved into a three-layer architecture.
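As a toy illustration of the three layers, the sketch below uses a logging function, log_message (a name invented for this example), as the "middleware": the application calls it, and it in turn asks the OS, via the standard library, to perform the actual output.

```c
/* Application -> middleware -> OS: the application only talks to the
 * middleware; the middleware formats the request and hands it to the OS. */
#include <stdio.h>
#include <time.h>

/* --- middleware layer: adds a timestamp, then asks the OS to do the I/O --- */
void log_message(const char *text) {
    time_t now = time(NULL);
    printf("[%ld] %s\n", (long)now, text);   /* the OS drives the display/disk */
}

/* --- application layer --- */
int main(void) {
    log_message("order received");
    log_message("order shipped");
    return 0;
}
```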
Object-oriented programming
There are several different "schools of thought" when it comes to how programs are designed, and object-oriented programming (OOP) is one of them. In OOP, a program is built around units called objects (instances of classes), and processing is achieved through the interaction between these objects.
It’s a brilliant and logical concept, yet one that can be difficult to truly master.
Object-oriented programming has fundamentally changed the way software is developed.
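To stay consistent with the other examples, the sketch below mimics the object idea in plain C: data and the operations on it are bundled into one unit, and the program works by invoking the object's operations. Languages such as C++ and Java provide classes for exactly this purpose.

```c
/* The object idea mimicked in C: a struct bundles data with the
 * operations on that data, and the program interacts with the object
 * only through those operations. */
#include <stdio.h>

typedef struct Counter {
    int value;                               /* the object's data      */
    void (*increment)(struct Counter *self); /* the object's operation */
} Counter;

void counter_increment(Counter *self) { self->value++; }

int main(void) {
    Counter c = { 0, counter_increment };    /* create an object             */
    c.increment(&c);                         /* interact with it via methods */
    c.increment(&c);
    printf("count = %d\n", c.value);         /* prints 2 */
    return 0;
}
```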
Cloud and Big Data
With better networks, storing data on remote servers and offloading some processing to servers became common. Cloud and big data are developments from this idea.
Artificial Intelligence (AI)
Today, artificial intelligence (AI) can still be considered one kind of application. Its distinctive feature is that it learns from massive datasets and updates its own decision rules over time.
Because of this, even when AI achieves remarkable results, we usually can’t tell exactly why it made those choices.
The mechanisms behind artificial intelligence are essentially combinations of existing mathematical methods—such as neural networks and principal component analysis—rather than something entirely revolutionary.
It became practical mainly because of advances in computing power.
Some experts even argue that today’s dominant AI paradigm may not remain the standard in the future, making this a field whose direction of evolution is still uncertain.
Quantum computers
A quantum computer is based on principles entirely different from those of a conventional von Neumann computer. It promises enormous computational power for certain kinds of problems and was once considered purely theoretical.
However, parts of it have now been realized in practice, and it remains unclear how this technology will continue to evolve.
Software patents
When filing a software patent, it’s important to explain how the software actually cooperates with the hardware.
Since the appearance of operating systems and high-level programming languages, the inner workings of machine code have become harder to see — and in the future, applications and hardware may drift even further apart.
Some machines can function without software, but software itself cannot exist without hardware to run on.
That’s why, when drafting a software patent, it’s important to imagine what kind of hardware your software actually works on — doing so helps satisfy the patent requirements and makes it harder for competitors to work around your invention.