Setun, the Soviet ternary computer, was more than an experiment—it was a glimpse into a lost computing paradigm. Abandoned by capitalism, its story reveals how profit stifles scientific progress.
Introduction
A conversation with a friend about an esoteric concept brought an old topic back to the surface of my mind, the way things tend to with my autistic gestalt processor brain. One moment, I was deep in discussion about the structure of reality, the next, my mind had made a rapid-fire connection to an obscure Soviet computer from 1958. It wasn’t just any machine, but the Setun (Сетунь), the world’s first balanced ternary computer, an invention that stood apart from the binary-driven computing paradigm that would come to dominate. And as the pieces clicked together in my head, I realised that Setun might have been more than just a forgotten experiment in alternative computation—it could very well be the missing bridge in materialising an entirely different way of thinking about numbers, reality, and computation itself.
Setun was a machine designed in the early days of the Cold War, built not in the capitalist West but under the constraints of a Soviet system that, despite its economic limitations, was willing to explore scientific frontiers purely for the sake of knowledge. Unlike the binary computers of the West, which operated on a rigid system of ones and zeroes, Setun functioned on a balanced ternary system—a logic that incorporated negative one, zero, and positive one (-1, 0, +1). This wasn’t just an arbitrary design choice; ternary logic offered a mathematically more efficient structure, using fewer components to perform the same calculations as a binary system. It had the potential to be more powerful, elegant, and even closer to the natural principles that govern reality. And yet, despite its advantages, Setun was discarded—not because it failed, but because the global technological trajectory had already been locked into binary.
This is where my mind began spiralling outward. If numerical frameworks are not just human inventions but reflections of deeper structures in reality, then could Setun have been an unintentional rediscovery of something ancient? Something fundamental? Across different cultures and traditions, certain numerical principles—especially those linked to base-9 mathematics—have been seen as holding cosmic significance. What if balanced ternary wasn’t just an experimental computing model but a forgotten key to a more natural way of encoding information? What if the lost potential of Setun wasn’t just about computing, but about a broader, systemic failure to recognise an entire paradigm that could have changed how we approach numbers, consciousness, and even the nature of intelligence itself?
This led me to a bigger, more uncomfortable question: how much knowledge has been lost because of profit-driven research? The Soviet Union, for all its faults, did not operate under the same constraints as the capitalist West when it came to scientific inquiry. It had the freedom to explore ideas that were impractical, unprofitable, and wildly speculative, simply because the state saw value in knowledge itself. The West, by contrast, let market forces dictate what was worth pursuing. And when binary computing became standardised—because it was already deeply embedded in corporate and military infrastructures—alternatives like Setun were dismissed, discarded, and forgotten.
Now, decades later, as we grapple with the limitations of binary systems—especially in areas like artificial intelligence, quantum computing, and neural networks—we find ourselves circling back to the very concepts Setun embodied. But how much time have we lost? And how many other ideas, perfectly viable yet commercially inconvenient, have we let slip away?
The Economic War Against Soviet Innovation
I’ve always had an insatiable curiosity, a trait that has shaped my entire life. It’s the kind of curiosity that doesn’t just accept things at face value but compels me to dig, to question, to trace the patterns that others overlook. Growing up during the Cold War, I remember the endless anti-Soviet rhetoric—the same predictable lines about how the USSR was a brutal, backwards dictatorship, how communism stifled innovation, how capitalism was the only path to progress. And yet, even as a child, something about this never sat right with me. If the Soviet Union was so backwards and wrong, how did they get to space first? How had they managed to launch Sputnik before the United States, forcing the West into a panicked scramble to catch up?
The more I looked, the more firsts I found. Not only did the Soviets put the first satellite in orbit, but they sent the first human into space—Yuri Gagarin. And then, the first woman in space—Valentina Tereshkova, at a time when women in the West were still fighting to be taken seriously in the workforce. The more I dug, the more it became clear that the Soviets were far from the backward fools they were made out to be.
This realisation expanded beyond science and space. I stumbled upon stories of Soviet snipers in World War II, particularly Lyudmila Pavlichenko, who had over 300 confirmed kills. A woman. A soldier feared by the Nazis. And then, another revelation—the Soviet Union was the country that actually won the war. In the Western narrative, it was always the heroic Americans storming the beaches of Normandy that turned the tide, but the numbers told a different story. The Soviets lost over 27 million people in the fight against fascism. They were the ones who broke the back of the Nazi war machine, fighting through the brutal sieges of Stalingrad and Leningrad, pushing the Germans back to Berlin long before the Americans ever set foot on European soil. The U.S., in comparison, had suffered relatively few losses and joined the European front only after the Soviets had already done the hard work of bleeding the Nazis dry.
That was when I started to unravel a much bigger story—one that wasn’t in the history books we were given, but available through my Marxist grandmother. The more I inquired, the more I realised that the Soviet Union’s struggles weren’t just the result of its internal politics; they were by design, the consequence of a deliberate, systematic economic war waged by the West. The Bretton Woods System, which established the post-war financial order, was built to entrench U.S. economic dominance, ensuring that Western Europe and its allies remained firmly within a capitalist global structure. The Marshall Plan, whilst presented as a generous effort to rebuild Europe, was in reality a strategic move to block Soviet influence and tie Western economies to American capital.
And beyond that? The U.S. didn’t just oppose socialism—it crushed it wherever it emerged. Any country that attempted to pursue an independent economic path outside of U.S. control was subjected to sanctions, embargoes, coups, and invasions. The Soviet Union itself was economically isolated, forced to develop its own technologies because Western supply chains were entirely closed to them. They weren’t allowed access to advanced Western semiconductors, transistors, or microchips—things that the U.S. and its allies took for granted (think DeepSeek as China’s response to economic embargo). They had to build their computing infrastructure from scratch.
And yet, despite this near-total blockade, Soviet scientists didn’t just keep up; in some areas, they pulled ahead. This was when I began to see a fundamental difference in how research was approached under capitalism versus socialism. The West was “market-driven”—scientific progress had to be profitable, or it wouldn’t be funded. The Soviet Union, by contrast, took a state-driven approach—if an idea had potential, if it could contribute to human knowledge, it was worth exploring. There was no need to prove its market viability before securing funding. This is how the USSR could fund projects that the West wouldn’t even consider—things like balanced ternary computing, massive-scale geo-engineering, or deep space exploration missions with no immediate commercial return.
This wasn’t a story of failure. It was a story of resilience—of a nation forced to innovate under economic siege, pushing forward despite the world’s most powerful empire trying to erase any alternative to capitalist hegemony. And as I learned more, I saw that this wasn’t just about the Soviet Union. The same pattern repeated everywhere: Chile in 1973, Indonesia in 1965, Iran in 1953, Guatemala in 1954, Congo in 1961—anywhere socialism or independent economic models emerged, they were brutally dismantled.
The more I understood this, the more I realised that the narrative I had been fed growing up was not about truth—it was about control. The goal was never to provide an honest account of history but to ensure that no one ever questioned capitalism’s dominance. The economic war against the USSR wasn’t just about defeating a rival superpower; it was about making sure that no one ever saw an alternative as viable. And that realisation changed everything.
Setun: The Revolutionary Ternary Computer
The more I delved into the story of Soviet innovation, the more I kept circling back to Setun, an obscure yet fascinating computer built in 1958 at Moscow State University by Nikolay Brusentsov and Sergei Sobolev. Whilst the world had already settled on binary—ones and zeroes, on and off, true and false—Setun operated on an entirely different logic: balanced ternary (-1, 0, +1). It was unlike anything in Western computing, not because the Soviets didn’t understand binary, but because they saw an opportunity to develop something more elegant, efficient, and mathematically harmonious.
At first, I saw Setun as a mere curiosity—an experiment in an alternative computing model that never took off. But the deeper I went, the more I realised that ternary computing wasn’t just a technological detour—it might have been a lost pathway to a more natural way of structuring information. The power of base-3 mathematics, which Setun embodied, connects to something far older than modern computing. It has echoes in ancient numerical systems, natural efficiencies, and even the fundamental structures that govern reality.
The Hidden Power of Base-3
Most of us are trained to think in base-10 (decimal), simply because we have ten fingers. But if we step back and look at numerical systems across cultures, we see that base-10 wasn’t the only, or even the best, choice. Many of the most enduring numerical frameworks in history have been built on multiples of 3, including base-9 and base-60, systems that governed entire civilisations.
Take base-9, for instance. In Vedic mathematics, Egyptian numerology, and even esoteric Western traditions, the number 9 has long been considered a number of completion and recursion. It’s the foundation of magic squares, sacred geometry, and self-replicating numerical cycles—the very patterns that appear in fractals and organic systems. And 9 has a striking property in ordinary decimal arithmetic: because it is one less than 10, any multiple of 9 reduces back to 9 when its digits are repeatedly summed (the old “casting out nines” trick):
9 × 1 = 9
9 × 2 = 18 (1 + 8 = 9)
9 × 3 = 27 (2 + 7 = 9)
This self-referential property is something binary doesn’t have—it is a system that folds back into itself, suggesting an inherent efficiency and symmetry. If base-9 and its ternary underpinnings naturally lend themselves to self-organising systems, could it be that balanced ternary computing was, in fact, a more natural model for structuring data and problem-solving?
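This digit-sum behaviour is easy to verify in a few lines of Python (purely illustrative, and nothing to do with Setun’s own software):

```python
def digital_root(n: int) -> int:
    """Repeatedly sum the decimal digits of n until one digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

# Every positive multiple of 9 collapses back to 9.
for k in range(1, 6):
    print(9 * k, "->", digital_root(9 * k))
```

Running this prints 9, 18, 27, 45 and so on, each reducing to 9, no matter how large the multiple gets.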
And then there’s base-60, used by the Sumerians and Babylonians. This system, which is still embedded in our measurement of time (60 seconds, 60 minutes, 360 degrees), belongs to the same family: 60 is a multiple of 3, and its companion 360 is itself a multiple of 9. The endurance of base-60 suggests that our oldest civilisations might have understood something about numerical efficiency that modern computing ignores. A ternary computing system could have seamlessly extended into a base-9 or base-60 framework, unlocking entirely different ways of processing information.
Setun’s Mathematical Advantage
Unlike binary, which forces every decision into a hard yes or no, balanced ternary introduces a neutral state (0), allowing computations to be more fluid, adaptive, and energy-efficient. In many ways, this mirrors natural systems, which don’t operate on binary logic but instead involve gradients, oscillations, and balance points.
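To make the three-state idea concrete, here is a small Python sketch of encoding and decoding balanced ternary, writing the digits −1, 0, +1 as 'T', '0', '1' (a common notational convention; this is an illustration of the number system, not Setun’s actual machine format):

```python
def to_balanced_ternary(n: int) -> str:
    """Encode an integer with digits -1, 0, +1, written as 'T', '0', '1'."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, 3)
        if r == 2:          # a remainder of 2 becomes -1, carrying into the next trit
            r = -1
            n += 1
        digits.append("T" if r == -1 else str(r))
    return "".join(reversed(digits))

def from_balanced_ternary(s: str) -> int:
    """Decode a balanced-ternary string back to an integer."""
    value = 0
    for ch in s:
        value = value * 3 + {"T": -1, "0": 0, "1": 1}[ch]
    return value

print(to_balanced_ternary(8))        # 8 = 9 - 1, so "10T"
print(from_balanced_ternary("10T"))  # back to 8
print(to_balanced_ternary(-8))       # "T01": negation just swaps 1 and T
```

Notice the elegance the neutral state buys: negative numbers need no separate sign bit, and negating a value is just mirroring its digits.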
From an engineering perspective, Setun’s ternary logic was reported to require significantly fewer logical elements—figures of around 30% fewer are commonly cited—to perform the same operations as a binary computer. That’s a staggering level of efficiency, especially considering the limitations Soviet engineers were working under. It wasn’t just about using fewer components—it was about a completely different way of thinking about computation itself.
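One classical way to quantify this kind of efficiency is “radix economy”: an idealised cost model in which representing N distinct values in base r costs roughly r × log_r(N) (digit positions needed, times states per digit). That cost is minimised near r = e ≈ 2.718, which makes 3 the most economical integer base. A quick check in Python (a textbook model, not a claim about Setun’s actual hardware counts):

```python
import math

def radix_economy(r: int, n_values: int = 10**6) -> float:
    """Idealised representation cost in base r: digit positions
    (log_r of the range) multiplied by states per digit (r)."""
    return r * math.log(n_values) / math.log(r)

for base in (2, 3, 4, 10):
    print(f"base {base:2d}: cost = {radix_economy(base):.1f}")
# base 3 comes out lowest of any integer base
```

Note that bases 2 and 4 tie exactly under this model, while base 3 beats both; the model abstracts away manufacturing realities, which is precisely where the economic rather than mathematical argument for binary was won.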
This is why ternary computing could have been groundbreaking for fields like artificial intelligence, neural networks, and quantum computing—all of which naturally involve multi-state logic rather than hard binary switches. The more we move toward complex problem-solving in AI and quantum mechanics, the more we find ourselves trying to force binary computers to simulate things they weren’t designed for.
But what if we had gone another way? What if, instead of defaulting to binary computing because it was industrially convenient, we had developed computing systems that aligned more closely with organic intelligence, natural recursion, and ancient numerical principles?
The Road Not Taken
Setun was discontinued not because it failed but because “the world” (aka, the United States and its corporations) had already chosen a technological standard based on economic and industrial inertia. The capitalist world had settled on binary computing because it was simpler to manufacture and already entrenched in corporate and military infrastructures. Once that happened, there was no going back—software, chip architecture, and entire industries had been built around binary, making any alternative commercially unviable.
But that doesn’t mean it wasn’t a mistake.
Today, we are reinventing ideas that ternary computing could have solved decades ago. In AI, we are struggling with how to efficiently simulate multi-state neurons on a binary system. In quantum computing, we are moving toward qutrits (three-state quantum bits)—which functionally resemble balanced ternary computing. In biological modelling, we keep running into the problem that living systems don’t operate in binary.
The uncomfortable truth is that we may have forced the wrong paradigm to become dominant, not because it was the best, but because it was the most economically expedient at the time. Had the Soviets been able to continue developing Setun, had balanced ternary computing evolved alongside binary instead of being discarded, we might be living in a different technological reality today.
And that’s the thought that won’t let me go. What if, instead of constantly forcing binary systems to do things they aren’t designed for, we had built our computational world on something more elegant, recursive, and aligned with the deeper numerical structures that govern reality?
What if Setun was not just an experiment, but a glimpse into the computational model we were meant to follow?
Why Setun Was Abandoned: The Tyranny of Market Standardisation
The fate of Setun was never determined by its technical merits. It wasn’t abandoned because it failed or because it was inefficient. It was abandoned because “the world” had already chosen binary computing—not on the basis of scientific superiority, but because of economic convenience and corporate inertia. The decision had been made long before Setun was even built, dictated not by innovation, but by “market forces,” industrial standardisation, and the interests of Western technology firms.
By the time Setun emerged in 1958, IBM and the Western computing industry had fully committed to binary-based transistor computers. This wasn’t just about technological preference—it was about control. Before and during World War II, American tech giants like IBM had actively collaborated with Nazi Germany, providing crucial data-processing systems that helped organise the logistics of the Holocaust. And yet, in the post-war years, these same corporations were not only forgiven but positioned as the global leaders in computing technology. The U.S. government and military-industrial complex, rather than holding these firms accountable, integrated them into its Cold War strategy, ensuring that their binary-based technology became the standard for global computing.
With that foundation in place, the economic forces of the West locked the world into binary before alternative models could even be explored. Corporate investment in computing prioritised what was already commercially viable, not what was scientifically optimal. It didn’t matter that Setun’s balanced ternary logic was more efficient—Western tech companies had already poured billions into binary infrastructure, and deviation from that standard was seen as a threat to profitability.
The Tyranny of Market Standardisation
Once binary computing had become the industry norm, it was nearly impossible for any competing paradigm to take root. The nature of capitalist research and development funding means that ideas don’t just survive on merit; they survive based on return on investment (ROI). Scientific breakthroughs that don’t promise immediate commercialisation rarely get funding.
This is the crux of why alternative computing paradigms like ternary logic were never explored further. Even though ternary systems could have been mass-produced and optimised, the initial cost of transitioning away from binary was considered too high—not in terms of technological feasibility, but in terms of corporate risk and profitability.
Silicon Valley had already mass-produced binary logic circuits, and shifting to ternary computing would have required a complete overhaul of both hardware and software.
Governments and corporations had invested billions in binary-based programming languages, infrastructure, and systems.
Any alternative model—even a superior one—was seen as commercially disruptive rather than innovative.
This isn’t an isolated case. Capitalist markets regularly kill off superior technologies simply because they don’t fit the economic model that has already been established. Planned obsolescence, suppression of alternative energy sources, and the monopolisation of industries all follow the same pattern—profitability dictates which ideas survive, while scientific merit takes a back seat.
What If Setun’s Model Could Be Mass-Produced Today?
Now, decades later, as computing hardware advances, ternary logic could be mass-produced at a lower cost than ever before. Modern manufacturing techniques have dramatically reduced the cost of logic gates, circuit designs, and processor fabrication. If ternary computing were revisited today, it could potentially offer:
Lower power consumption than binary systems, making computing more energy-efficient.
Faster and more efficient AI processing, better aligned with how neural networks actually function.
More natural integration with quantum computing, where qutrits (three-state quantum bits) align closely with ternary logic.
And yet, even with the technological barriers removed, the economic barriers remain. Corporate interests still dictate what gets funded and what doesn’t, and because the entire modern computing industry is structured around binary, the push to revisit ternary logic remains economically inconvenient rather than scientifically unviable.
This raises an even bigger question: how many other breakthroughs—technologies that could fundamentally reshape our world—have been buried simply because they didn’t fit the market’s immediate needs? How much knowledge has been lost, delayed, or suppressed, not because it wasn’t valuable, but because it wasn’t profitable?
Setun’s story is not just a footnote in computing history. It is a warning about the cost of letting markets dictate the course of human knowledge.
What Was Lost? The Future That Could Have Been
I've always loved speculative sci-fi—the kind that imagines futures where humanity has transcended scarcity, where machines handle the drudgery of life, and people are free to pursue knowledge, art, and exploration. As a child, I was captivated by the idea of the world we were supposed to have by now—the “Jetsons” future, where automation had eliminated unnecessary labour, and technology existed to serve humanity, not exploit it. But as I got older, that vision felt more like a broken promise. Instead of AI and robotics creating a world of abundance, they were being used to squeeze more profit out of workers, intensify surveillance, and maintain artificial scarcity. Instead of technology liberating people, it was being weaponised against them.
That disillusionment led me down a path of questioning everything I had been told about how technological progress happens. If we had the ability to automate so much, why was it being used to extract more from people rather than freeing them? If computers were advancing so rapidly, why was it that real material conditions weren’t improving for most? And the more I dug, the more I saw how capitalist economics actively suppresses the kind of technological development that could lead to that future of abundance. The history of Setun, and of ternary computing more broadly, became a perfect example of how a different future was possible—but was deliberately abandoned because it wasn’t profitable enough.
Had Setun’s balanced ternary logic been developed instead of discarded, our entire technological world might look radically different today. Artificial intelligence, quantum computing, and neural networks all struggle with the limitations of binary systems—problems that ternary logic might have solved decades ago. The forced rigidity of ones and zeroes makes it difficult to model organic, non-binary processes, which is exactly why modern AI research relies on layers of abstraction to simulate multi-state thinking. But Setun’s design—built on a natural three-state logic—could have offered a computational framework far better suited for AI and neural networks, one that mirrors the way biological systems process information.
Beyond AI, ternary logic could have made computing hardware far more efficient. The entire semiconductor industry has spent decades miniaturising transistors and squeezing more processing power out of binary architecture, but much of that effort has been a workaround for the inefficiencies of binary itself. If ternary computing had been the foundation, we might have developed processors that consume less power, run faster, and process complex data structures more efficiently. Instead of the endless cycle of forced obsolescence—where hardware must be constantly upgraded just to keep up with the inefficiencies of software bloat—ternary computing might have offered a more sustainable, scalable path for computing.
And that’s what haunts me. A completely different trajectory was possible. One where computing didn’t just advance in ways that were commercially viable, but in ways that were genuinely transformative for humanity. We could have had software paradigms designed for natural computation, rather than trying to brute-force everything into a system built on rigid binaries. We could have had hardware that evolved toward energy efficiency and adaptability, rather than one that maximised short-term profit for semiconductor manufacturers. We could have had a world where automation wasn’t a tool for extracting more labour, but for eliminating unnecessary work altogether.
This is part of a larger pattern. Capitalist economies discourage long-term, speculative projects, even when they could be revolutionary. The drive for profit demands short-term returns, forcing companies to focus on incremental improvements rather than radical breakthroughs. The need for market dominance forces early standardisation, making it nearly impossible for alternative paradigms—no matter how promising—to take hold. Anything that doesn’t immediately serve the existing power structures, that threatens the profitability of entrenched industries, is ignored, buried, or actively suppressed.
And so here we are, decades behind where we could be, still struggling with artificial limits imposed not by the laws of physics or the constraints of human ingenuity, but by the logic of profit. Instead of heading toward a future where machines do the heavy work and humans enjoy real leisure, we have a system that keeps people working longer and harder while automation is used to increase precarity rather than reduce it. Instead of a world of post-scarcity, we have artificial scarcity, enforced not by nature but by economic design.
Ternary computing wasn’t just an abandoned Soviet experiment. It was a glimpse of the kind of technological alternative that capitalism refuses to allow—one where efficiency is measured by how much it benefits society, not how much it maximises corporate revenue. It was a road not taken, and in its absence, we are left with the technological dystopia of endless extraction rather than the utopia of abundance we were promised.
China’s Partial Revival of the Soviet Scientific Model
The West loves to frame China’s economic model as some kind of paradox—an enigma that doesn’t conform to capitalist expectations. Analysts in London and New York constantly wring their hands over the Chinese stock market, lamenting how it never delivers the kinds of profits investors expect. But they fail to grasp an essential truth: that’s the point. China’s economic and technological landscape doesn’t exist to maximise shareholder returns—it exists to build infrastructure, advance long-term strategic goals, and sustain national development. It’s a model that, whilst not a direct continuation of the Soviet approach, carries echoes of state-driven research and investment that capitalism abandoned long ago.
Unlike the U.S., where corporate profit dictates the direction of research, China still heavily funds non-profit-driven scientific exploration, focusing on fields that may not offer immediate commercial payoffs but are deemed strategically essential. This is why, whilst Western R&D struggles to attract funding for anything outside the scope of immediate monetisation, China is able to invest billions into speculative, high-risk, and long-term research—the kind of projects that capitalism systematically rejects.
One of the clearest examples of this is quantum computing and quantum encryption. Whilst U.S. firms like Google and IBM have made progress in commercial quantum computing, Western investment in the field remains cautious and fragmented, heavily reliant on private sector funding and academic grants with strict oversight. China, by contrast, has committed vast resources to quantum research as a matter of national strategy, with state-funded labs, dedicated infrastructure, and a long-term vision. The result? China has already demonstrated breakthroughs in quantum communication, including the first quantum-encrypted satellite, something that has enormous implications for cybersecurity, surveillance, and secure communication.
The same pattern appears in nuclear fusion research—what China calls its “artificial sun” projects. Fusion energy, the holy grail of limitless clean power, has long been dismissed by Western investors as too costly, too uncertain, and too far from profitability. Whilst the West inches forward through private-public partnerships that move at the speed of bureaucracy, China has simply thrown state resources at the problem, building massive fusion reactors, increasing research capacity, and accelerating experimentation on a scale that no Western country would dare fund. They aren’t waiting for the market to make fusion profitable. They are forcing it into existence through sheer investment—just as the Soviets once did with their scientific endeavours.
And then there’s space colonisation, another area where China’s approach is completely different from the West. NASA, once a beacon of state-driven space exploration, has been gutted by decades of budget cuts, increasingly reliant on private corporations like SpaceX to maintain any momentum. The U.S. space programme now operates at the mercy of private interests, forced to justify every mission in economic terms. China, by contrast, has pushed forward with state-funded space exploration, building a modular space station (because they’ve been shut out of the “International Space Station”?), launching lunar probes, and laying the groundwork for a permanent Moon base—all without needing to justify the immediate ROI. They are thinking in decades, not fiscal quarters.
Even in artificial intelligence, where Silicon Valley still dominates in raw technological capability, China’s approach is fundamentally different. Whilst Western AI research is scattered across dozens of competing companies, each hoarding data and algorithms for proprietary gain, China’s AI development is centrally coordinated and integrated into national goals. This allows for coherent, large-scale implementation of AI into infrastructure, governance, and industrial planning. Western analysts often frame this as authoritarian overreach, but they miss the larger point: China’s model doesn’t just produce innovation—it ensures that innovation is used to serve national objectives rather than just corporate shareholders.
This all ties back to China’s fundamentally different investment landscape. The world’s four largest banks are all Chinese state-owned institutions. They don’t operate under the same logic as Western financial institutions, which exist primarily to extract profit from every transaction. Instead, China’s banks function as extensions of national policy, directing capital toward projects deemed vital for technological sovereignty, infrastructure, and industrial expansion—even if they don’t turn a profit in the short term. This is why Western economists constantly fail to understand China’s stock market, complaining that it doesn’t deliver the returns that American investors expect. But that’s because it isn’t designed to. Unlike in the West, where financialisation has completely overtaken real production, China’s economy doesn’t exist to enrich speculators—it exists to build, develop, and maintain national self-sufficiency.
In many ways, this mirrors the Soviet model, where scientific progress was funded because it was important, not because it was profitable. The Soviets didn’t have to justify their space programme, computing research, or experimental physics in terms of quarterly earnings. Neither does China. And whilst the Soviet Union ultimately collapsed under the pressures of economic isolation and internal contradictions, China has adapted its model, learning from past mistakes whilst maintaining the core principle that some things—scientific progress, infrastructure, and technological self-reliance—are too important to be left to the whims of the market.
Meanwhile, the West continues down the same path of market-driven stagnation, where only research that turns an immediate profit is considered worthy of funding. This is why Western innovation is increasingly dominated by software, ad-driven algorithms, and financial speculation, while the harder, riskier scientific pursuits—fusion power, space colonisation, long-term AI development—are being left to China.
What this reveals is something deeply uncomfortable for the capitalist world: technological progress does not need the profit motive to exist. In fact, history suggests that the greatest scientific leaps often happen in the absence of market pressure, not because of it. The Soviet Union, despite its economic disadvantages, proved this by pioneering computing models, spaceflight, and nuclear technology. And now, China is proving it again. Whilst the West remains trapped in its cycle of short-term financial gains, China is playing the long game, building the future whilst Western analysts complain about stock market returns.
Final thoughts …
Setun’s fate wasn’t decided by its technical merits but by the economic structure of the world. It was a casualty of “market-driven science,” where profitability dictates what survives, and anything that doesn’t fit the existing paradigm is abandoned, no matter how promising. Capitalism excels at commercialising technology, but it stifles long-term, visionary research—especially when that research challenges the entrenched interests of the market. Had balanced ternary computing been allowed to develop, we might be living in a world with more efficient AI, sustainable computing, and an entirely different approach to digital logic—but instead, the economic demands of the time forced it into obscurity.
If humanity is ever to fully explore alternative technological paradigms, we must free scientific research from the constraints of short-term profitability. There must be a balance between state-driven investment in speculative research and commercial innovation, ensuring that knowledge isn’t discarded simply because it isn’t immediately profitable. Setun was just one lost opportunity—one glimpse of a future we never pursued. But it forces us to ask a much bigger question: How many other Setuns have been buried by the demands of capitalism? And what kind of world might we be living in if they hadn’t been?