The Illusion of Technological Neutrality

Eder Souza · 1/20/2026

Every system carries assumptions. Neutrality is a convenient myth.

Introduction

In the modern world, we frequently encounter arguments structured as seemingly valid syllogisms that nonetheless lead to mistaken conclusions by concealing unexamined assumptions. These arguments display an appearance of logical rigor while avoiding deeper conceptual analysis.

A recurring example is the claim that science is based on facts. From this idea, qualities such as truth and neutrality are often attributed to science in ways that do not adequately describe what it actually is or how it operates. This line of thought is typically treated as self-evident and is rarely subjected to critical examination.

Following the structure of the classical Aristotelian syllogism, the reasoning may be formulated as follows:

Major premise: Everything that is based on facts is true and neutral.

Minor premise: Science is based on facts.

Conclusion: Therefore, science is true and neutral.

At first glance, the argument appears coherent. However, it proves flawed because it conceals fundamental assumptions. Facts do not speak for themselves; they are observed, selected, organized, and interpreted within specific theoretical models. Science, therefore, is not merely an accumulation of raw data, but an interpretive practice guided by previously assumed theories, methods, and criteria.

This false syllogism confuses fact with interpretation and assumes neutrality as an automatic consequence. Even when science works with empirical data, philosophical, methodological, and cultural assumptions determine what may count as a “relevant fact.” In this way, the reasoning weakens the very concept of neutrality.

Furthermore, this argument disregards the structuring role of scientific theories. As the philosopher and historian of science Thomas Kuhn argued, science operates within paradigms that shape not only its theories but also what can be recognized as fact. Scientific observation is guided by prior conceptual structures. In practical terms, what counts as a “fact” emerges within a theoretical paradigm, not from neutral observation of reality.

If science itself, at its most basic level, does not operate from raw and neutral facts but within paradigms that guide observation and interpretation, it becomes difficult to maintain that technology, which arises from the practical application of this knowledge, could be considered neutral.

Technology not only inherits the assumptions of the science that underlies it; it also incorporates additional layers of decision-making: efficiency criteria, success metrics, economic constraints, institutional objectives, and design choices. Every technical system is the result of a chain of accumulated interpretations and values.

The same false syllogism applied to science reappears in another form: the idea that technological systems are neutral because they are based on data, models, or facts. As with science, this argument ignores that data are selected, models are constructed, and systems are designed to fulfill specific purposes.

In this context, neutrality ceases to function as a technical observation and instead becomes a convenient narrative: a way of concealing human choices beneath the appearance of technical inevitability. It is convenient because it simplifies discourse and diffuses responsibility. By asserting that a technology is neutral, attention shifts from human decisions to technical abstractions: the algorithm, the system, the tool.

Thus, technological neutrality is not a technical datum but a philosophical position often assumed without recognizing its own identity.


Systems Do Not Emerge from Nothing

No system arises spontaneously. Every piece of software is the result of a continuous chain of human decisions. Before any code exists, there is already a framing of reality. Modeling a system is always an exercise in controlled loss: choosing what will be represented, simplified, and excluded.

What is not modeled does not disappear; it simply ceases to be considered by the system. This exclusion is not an accidental error, but an inevitable consequence of technical, temporal, and conceptual limits. In short, every model presupposes a point of view.

In an early project in which I participated, while building an entity-relationship model, it became clear that the system could not encompass the full complexity of the business. Certain rules were deliberately excluded due to technical and time constraints. The experience demonstrated that modeling does not mean representing reality as it appears, but choosing which aspects of it the system will consider.
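To make the idea of controlled loss concrete, consider a minimal sketch in Python. The domain, fields, and omissions below are hypothetical, not the actual project's model; the point is that every field is a decision about what the system will see, and every omission a decision about what it will not.

```python
from dataclasses import dataclass
from datetime import date

# A hypothetical order entity. What is listed here is all the system knows.
@dataclass
class Order:
    order_id: str
    customer_id: str
    total: float   # a single number stands in for an entire pricing history
    placed_on: date

# Deliberately NOT modeled (due to technical and time constraints):
# - negotiated per-customer discount rules
# - informal delivery arrangements agreed by phone
# - seasonal exceptions handled manually by the sales team
# For this system, those aspects of the business simply do not exist.
```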

Before code comes into existence, there are always political decisions, not in the partisan sense but in the structural and organizational sense. Systems are built within organizational policies and institutional practices that determine what can, should, and should not be done. This includes security policies, versioning standards, code quality practices, accessibility requirements, and legal compliance. Technology choices are therefore not operational details; they are normative criteria embedded in the system.

In many contemporary contexts, these policies are no longer merely documented; they are automatically enforced through pipelines, validations, and technical constraints. Policy ceases to be merely declared and becomes imposed by the system itself.
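A minimal sketch of what this enforcement can look like, assuming a hypothetical rule (public functions must carry docstrings) run as a pipeline gate; the specific rule is invented, but the mechanism is the point:

```python
"""Policy as code: a check a CI pipeline could run on every commit.
The rule itself is an example; what matters is that the policy is no
longer merely documented, it is enforced."""
import ast
import sys

def violations(source: str, filename: str) -> list[str]:
    """Return one message per undocumented public function."""
    tree = ast.parse(source, filename=filename)
    problems = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and not node.name.startswith("_"):
            if ast.get_docstring(node) is None:
                problems.append(f"{filename}:{node.lineno}: public function "
                                f"'{node.name}' has no docstring")
    return problems

if __name__ == "__main__":
    all_problems = []
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as f:
            all_problems += violations(f.read(), path)
    for p in all_problems:
        print(p)
    # A non-zero exit code fails the pipeline: the policy is imposed, not suggested.
    sys.exit(1 if all_problems else 0)
```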

Within software architecture, this becomes even clearer. According to Clean Architecture and Domain-Driven Design, “policy” refers to the set of rules and decisions that define how a system should behave independently of technology. These are the choices that generate business value and guide how the system operates in the real world.
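A sketch of policy in this sense, using a hypothetical domain rule; note that nothing in it depends on frameworks, databases, or transport, yet it is precisely where the system's behavior in the world is decided:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass(frozen=True)
class Invoice:
    issued_on: date
    amount: float

def is_overdue(invoice: Invoice, today: date) -> bool:
    """Pure policy: an invoice is overdue 30 days after issue (invented rule).
    Nothing here depends on how the invoice is stored or displayed."""
    return today > invoice.issued_on + timedelta(days=30)
```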

Even for the solitary developer, neutrality does not exist. Before the IDE is even opened, assumptions, biases, and criteria are already in operation. Architecture is a choice. Modeling is a choice. Prioritizing performance over explainability is a choice. Automating a human decision is also a choice.

Although technical discourse relies on seemingly objective criteria such as efficiency, scalability, and accuracy, we are still dealing with values. Efficient for whom? Accurate in relation to what? Scalable at what cost?

Technique does not eliminate the normative dimension of the world. It merely encapsulates it.


Neutrality as a Comfortable Narrative

The idea of technological neutrality does not emerge from nowhere. It inherits an older conception, widely disseminated in popular understandings of science: the idea that science begins with raw facts and produces objective, value-free knowledge.

It was precisely this conception that Thomas Kuhn challenged. By studying the history of science, he demonstrated that what a scientific community recognizes as “fact” depends on the paradigm within which it operates. The paradigm defines which problems are relevant, which methods are valid, and which results are meaningful. Scientists do not observe data neutrally; they observe them from within a prior theoretical model.

When this aspect is ignored, a simplifying reasoning emerges: if science is neutral because it deals with facts, then technology as its practical application must also be neutral. However, this conclusion depends precisely on what Kuhn showed to be false: the existence of facts independent of interpretive structures.

In the technological context, neutrality functions as an operationally convenient narrative. Systems are presented as if they merely execute what data indicate, when in practice someone has already decided which data to collect, which variables to consider, which exceptions to ignore, and which objectives to optimize.

For those working in software development, this should be obvious. Metrics do not arise spontaneously. Models do not train themselves. Business rules do not automatically emerge from data. All these dimensions are defined, refined, and prioritized by people and institutions.

When we say “the system decided” or “the algorithm just followed the data,” we create an artificial distance between outcomes and the decisions that made them possible. The system’s behavior begins to appear inevitable, almost natural. But this inevitability is constructed, just as scientific paradigms construct what counts as fact.

Neutrality offers comfort because it reduces the need to justify technical choices. It transforms decisions into apparent consequences and shifts debate from the realm of values to the realm of efficiency.

This interpretation is not new. At the end of the twentieth century, philosopher of technology Langdon Winner argued that technical artifacts are not neutral instruments but structures that embody political and normative choices. Certain decisions become invisible because they are materialized in the artifact itself, operating independently of users’ conscious intentions.

When a technical system imposes behaviors, restricts possibilities, or automates decisions, it is not merely executing a function; it is materializing choices made during its design. The narrative of neutrality emerges precisely when these choices cease to be perceived as decisions and begin to be treated as natural properties of technology.

This comfort has a cost: choices become invisible, priorities go undiscussed, and limitations appear intrinsic to the system.

Just as scientific paradigms shape what counts as fact, technological systems shape what counts as legitimate decision. Neutrality does not eliminate values; it conceals them beneath a technical layer.

It is from this concealment that algorithms begin to be treated as impartial judges.


Algorithms Are Not Impartial Judges

Algorithms do not judge, evaluate, or understand. They execute rules, apply weights, and maximize functions with predefined objectives. Every algorithmic decision answers, implicitly or explicitly, a simple question: what should be optimized?

The algorithm does not ask this question; the question is answered before the algorithm exists.

In recommendation systems, for instance, there is no neutrality in the notion of “relevance.” Should the system prioritize dwell time, clicks, recurrence, diversity, or conversion? These criteria are external to the model. The algorithm merely orders results according to them. Changing the metric entirely changes system behavior, even if the code remains the same.
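A minimal sketch of this point: the ranking code below never changes, yet each definition of “relevance” produces a different ordering. The items and metrics are invented for illustration.

```python
items = [
    {"title": "A", "clicks": 900, "dwell_seconds": 12, "conversions": 2},
    {"title": "B", "clicks": 150, "dwell_seconds": 240, "conversions": 1},
    {"title": "C", "clicks": 300, "dwell_seconds": 60, "conversions": 40},
]

def rank(items, relevance):
    """Order items by whatever 'relevance' has been decided to mean."""
    return sorted(items, key=relevance, reverse=True)

by_clicks = rank(items, lambda i: i["clicks"])           # A first
by_dwell = rank(items, lambda i: i["dwell_seconds"])     # B first
by_conversion = rank(items, lambda i: i["conversions"])  # C first

for ranking in (by_clicks, by_dwell, by_conversion):
    print([i["title"] for i in ranking])
```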

The same applies to scoring systems. A credit score does not objectively “measure risk”; it operationalizes a specific definition of risk. Which variables should weigh more heavily? Recent history or long-term behavior? Should false positives or false negatives be penalized more severely? These are not purely statistical questions. They are normative decisions formalized in mathematical logic.
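A sketch of how such a normative decision becomes arithmetic, with invented costs. Under standard expected-cost reasoning, the approval threshold follows directly from how severely each kind of error is weighed; the model's code is identical in both cases below.

```python
def deny_credit(p: float, cost_false_positive: float,
                cost_false_negative: float) -> bool:
    """Deny when the expected cost of approving exceeds that of denying.

    Expected cost of denying:   (1 - p) * cost_false_positive  (good client turned away)
    Expected cost of approving:       p * cost_false_negative  (default absorbed)
    'p' is a model's estimated probability of default.
    """
    threshold = cost_false_positive / (cost_false_positive + cost_false_negative)
    return p > threshold

# If turning away a good client "costs" 1 and absorbing a default "costs" 9,
# anyone above a 10% estimated default risk is denied:
print(deny_credit(p=0.15, cost_false_positive=1.0, cost_false_negative=9.0))  # True
# Weight the same errors equally and the same person is approved:
print(deny_credit(p=0.15, cost_false_positive=1.0, cost_false_negative=1.0))  # False
```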

Data do not function as mirrors of reality. They are records produced by prior systems under prior rules. Logs reflect existing interfaces, flows, and incentives. Historical data carry the mark of past decisions, including exclusions. What was not measured simply does not exist for the model.

Training an algorithm on historical data is not starting from scratch; it is inheriting a trajectory of choices. Patterns are learned, but those patterns did not emerge naturally. They were produced within specific social, economic, and technical contexts.

Every algorithmic system optimizes something: minimizing error, maximizing engagement, reducing cost, increasing throughput. But optimizing one metric implies accepting losses in others. Precision may come at the cost of explainability. Performance may sacrifice fairness. Scalability may require aggressive simplification.

Trade-offs are not system failures; they are consequences of assumed priorities. The algorithm does not choose among them; it executes previously established choices.

Technical complexity often intensifies the illusion of impartiality. Large models, distributed pipelines, and opaque systems make it difficult to trace where decisions were made. The less visible the chain of choices, the easier it becomes to treat results as inevitable. But opacity does not produce neutrality; it shields decisions from scrutiny.

Many of these choices are disguised as requirements: “the business must maximize profit,” “the system must reduce risk,” “the model must prioritize those who convert more.” These may be legitimate goals, but they are not neutral. They are value judgments translated into code.

The problem is not that systems have criteria. Every system must have them. The problem lies in treating these criteria as natural properties of the algorithm rather than as human decisions embedded in its structure.

Algorithms are not impartial judges because they have no criteria of their own. They execute criteria defined by someone, in a given context, for specific objectives. Impartiality does not disappear because a system is complex or mathematically sophisticated; it disappears when we forget, or pretend to forget, who decided what should be optimized.

If algorithms merely execute decisions, then the central question ceases to be exclusively technical and becomes inevitably human: who builds, under which assumptions, and with what responsibilities?

It is at this point that the role of those who build systems ceases to be invisible.


The Role of the Builder

For those who work in software development, the myth of neutrality offers a comfortable refuge. It allows one to focus exclusively on implementation, treating high-impact decisions as simple technical requirements. The problem is that systems do not operate solely within code; they operate in the world.

Every technical choice carries consequences that exceed the immediate scope of the system. A prioritization rule determines who will be served first. A cutoff criterion determines who will be excluded. A pattern adopted at scale turns exceptions into norms. Even seemingly minor decisions, such as default values, rigid limits, and mandatory flows, shape behavior when applied repeatedly.

This does not make developers absolute moral arbiters, nor does it require that every line of code be accompanied by explicit ethical judgment. But it does require recognition of something basic: building systems is a situated human act, performed within specific organizational, economic, and cultural contexts.

In practice, builders rarely define ultimate objectives alone. There are business pressures, deadlines, metrics, legal constraints, and technical limitations. Still, there is always room for decision: how to model, what to automate, what to make explicit, what to hide behind abstraction.

Rejecting neutrality does not mean assuming guilt for everything. It means assuming clarity about one’s craft. Engineering is not merely about solving problems, but about defining which problems deserve to be solved and how.

When this awareness is absent, technique becomes self-referential. Internal metrics justify external decisions; efficiency becomes an end in itself. The system functions, but without reflection on its broader effects.

Recognizing the role of the builder does not make technology less efficient. It makes it more honest and, paradoxically, more robust, because explicit decisions can be questioned, revised, and corrected.

This is where technical responsibility becomes a sign of maturity.


Limit, Responsibility, and Clarity

Recognizing that technology is not neutral does not imply rejecting it or attributing intentions to it. It means restoring it to its proper limits. Technical systems do not decide for themselves; they operate within boundaries defined by models, metrics, and objectives.

The problem arises when these limits are forgotten: when technique is treated as the final authority rather than as a means, and when human choices embedded in design are presented as inevitable consequences of technical complexity.

Responsibility does not emerge only when something goes wrong. It exists from the moment a decision is crystallized in architecture, code, or model. Every abstraction eliminates possibilities. Every automation fixes behaviors. Every privileged metric displaces others to the margin.

Assuming this responsibility does not require anticipating every consequence. That would be unrealistic. It simply means recognizing that building systems is a situated act, oriented by values, even implicit ones.

At this point, the notion of limit becomes fundamental: technical limit, epistemological limit, moral limit. Technique does not resolve what has not been decided at the human level. Demanding neutrality from technology is, in practice, an abdication of reflection about these limits.

Clarity does not paralyze engineering. It prevents it from becoming blind to its own impact.

In the end, the question is not whether systems, algorithms, or models are neutral; they are not. The question is whether those who build them are willing to acknowledge the choices involved, or prefer to delegate them to technical abstractions treated as inevitable.

Technology amplifies capabilities and consequences. Recognizing this does not weaken the technical craft. It signals professional maturity.

Neutrality is a convenient myth. Responsibility, assumed with limit and clarity, is what remains, and it is enough.

References (ABNT)

KUHN, Thomas S. The Structure of Scientific Revolutions. 3rd ed. Chicago: University of Chicago Press, 1996.

WINNER, Langdon. Do artifacts have politics? Daedalus, Cambridge, v. 109, n. 1, p. 121–136, 1980.

SCOTT, James C. Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. New Haven: Yale University Press, 1998.

LATOUR, Bruno. Science in Action: How to Follow Scientists and Engineers Through Society. Cambridge: Harvard University Press, 1987.

LATOUR, Bruno. We Have Never Been Modern. Cambridge: Harvard University Press, 1993.

O’NEIL, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown Publishing Group, 2016.

EUBANKS, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press, 2018.

GEBRU, Timnit et al. Datasheets for Datasets. Communications of the ACM, New York, v. 64, n. 12, p. 86–92, 2021.

MARTIN, Robert C. Clean Architecture: A Craftsman’s Guide to Software Structure and Design. Boston: Prentice Hall, 2017.

EVANS, Eric. Domain-Driven Design: Tackling Complexity in the Heart of Software. Boston: Addison-Wesley, 2004.

Acknowledgements to Dayane for her support in reviewing the text.