Reading the World
Image: Enigma Records. 1987
A signal is a fragment of the world that someone has decided means something. The fragment could be many things: a patent filing, a policy draft, a subculture's new vocabulary, a product photograph, a phrase that starts showing up. Whatever it is, calling it a signal is an interpretive act. Someone has separated this thing from the ten thousand things around it and said, pay attention to this one. A person, working inside a tradition of reading, did that.
For the last fifty years, professional foresight practice has treated signals as the raw material of the work, collected into databases, sorted by framework, and used to populate scenarios, challenge assumptions, and keep conversations from drifting into extrapolation. Much of that craft describes signals as if they were objects in the world, waiting to be collected by a careful or curious observer.
This article wonders whether they are something else. It emerges from the process of rebuilding my own signals database, which turned out to be a convoluted and unsatisfying project, and from a question I considered during the rebuild. What if signals of change are readings rather than objects? The distinction is small on the page and large in the work.
I am writing this in the spring of 2026, in what Dr. Vivian Balakrishnan, Singapore's foreign minister, recently described as the end of an eighty-year period of American-underwritten globalization. It is a frank description from a serving diplomat. Speaking to Reuters in March 2026, Balakrishnan said the world order that produced an unprecedented run of peace and prosperity has ended. What comes next, in his words, is different. "Big powers and even lesser powers have a more narrow definition of national interest. They are willing to weaponize all levers in their hands. You look at the weaponization of currency, weaponization of technology, weaponization of critical minerals, weaponization of trade interdependency, which were supposed to keep us at peace with one another. Now it becomes yet another portal for exploitation."
The arrangements Balakrishnan describes are not new, but the reading of them might be. The same structural conditions that used to be interpreted as instruments of peace, on the old argument that countries bound together economically do not go to war, are now being interpreted as levers of coercion.
For decades a system based on UN Charter principles, multilateralism, territorial integrity, and sovereign equality produced an unprecedented run of development for the countries that plugged into it, with familiar costs to those left outside it. The Singapore story Balakrishnan cites, from a GDP per capita of five hundred dollars in 1965 to around ninety thousand today, is one version of what happened. China, South Korea, Germany, Finland, Ireland, Poland, Indonesia, and Vietnam each tell their own.
The rebuild work took on a curious cast in this moment. My approach needed to be different from what it was five years ago, when I started my first database with gusto, confident that more sources and better tagging would produce better readings. The world of 2026 has not rewarded that confidence.
When interdependencies are being weaponized, the groups that can read what is coming, early and in cultural context, might better navigate transformation. The ones that cannot read will be read by others. The capability depends, more than on any platform or tool, on the quality of the human interpretation that happens between a signal and a decision.
Any reading of the present is one among several the same evidence could support. This is what makes interpretation both powerful and vulnerable: a reading that becomes dominant becomes dominant for reasons that are not always reasons of validity. And interpretation is never closed.
I have done a reasonable scan of the signals collection tools available, and most of them were built for incremental competitive intelligence in a globalizing economy. I am not sure they will help small teams of skilled interpreters notice when the substrate they depend upon, from press coverage to shipping data, has been polluted.
But before thinking about what better tools might look like, I want to be clear about what I think signals are, what they are not, and what interpretation can do.
The weak signal as a concept enters strategic thinking in 1975, in a paper by Igor Ansoff titled 'Managing Strategic Surprise by Response to Weak Signals'. Ansoff argued that in turbulent environments, the signals strong enough for ordinary planning to register would always arrive too late. By the time a signal crossed the threshold of confidence conventional management required, the opportunity or threat was already there. His proposal was that organizations needed the capacity to act on ambiguous, incomplete, early information, not to predict with certainty but to hold themselves ready for multiple possible responses as the signal strengthened.
The idea traveled. Shell's scenario-planning group, which Pierre Wack came to lead, had reached a similar conclusion by a different route in the early 1970s. The OPEC oil embargo of October 1973 found Shell prepared in a way its competitors were not, and the case became foundational for the field of strategic foresight. Finland built futures research into the structure of the state from the 1990s onward, with the Finland Futures Research Centre at the University of Turku and the Committee for the Future at the Finnish Parliament both established in the early 1990s. Sitra, the national innovation fund, has more recently made weak signals scanning a regular national exercise.
The French prospective tradition, associated with Gaston Berger and later Michel Godet, has been taken up most actively in Latin America, where Godet's methodology has shaped territorial foresight work across the region. Sohail Inayatullah, drawing on South Asian and poststructuralist thought, developed Causal Layered Analysis as a method less of collecting signals than of reading any given signal at four depths at once. At the Institute for the Future, founded in 1968 and now based in Palo Alto, signals became the raw material of an American foresight domain that emphasized imagination and scenario immersion.
Elina Hiltunen evolved Ansoff's idea in 2008 with what she called the three-dimensional future sign. A weak signal, she argued, has three parts. The signal itself, which is the thing you notice. The issue it points to, which is what you think it is about. And the interpretation that gives it meaning, which is why you think it is about that thing and what it might mean.
The third dimension is the one that matters most, because without it the first two are inert. A fragment of noticed information with no interpretation is just a clipping; a fragment with an issue-attribution but no argument about meaning is a tag. Only a multi-dimensional reading is a signal in the sense that anticipation requires.
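Hiltunen's structure is concrete enough to sketch as a record. A minimal sketch in Python, with field names that are mine rather than hers:

```python
from dataclasses import dataclass

@dataclass
class FutureSign:
    """Hiltunen's three-dimensional future sign as a record.
    A fragment alone is a clipping; fragment plus issue is a tag;
    only all three dimensions make a signal."""
    fragment: str        # the thing noticed, e.g. a patent filing
    issue: str           # what you think it is about
    interpretation: str  # the argument about why it matters

    def is_signal(self) -> bool:
        # Without an interpretation, this is not yet a signal.
        return bool(self.fragment and self.issue and self.interpretation)
```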
This multi-dimensional reading is what the commercial tools struggle with. They capture the fragment and they are very good at attaching issues to it. Increasingly they also produce outputs that look like interpretation, generated paragraphs and risk scores and implication statements.
What is harder to produce is interpretation in the sense Hiltunen and others mean it, or an argument about inclusion and clustering that a reader could contest, edit, or develop further. That kind of interpretation is the hardest part of the work and the least automatable. Tools can either hand it back to the customer, or pretend to have done it.
Ask a foresight practitioner how they organize their signals and many will reach first for STEEP or PESTLE. Social, Technological, Economic, Environmental, Political. Or Political, Economic, Social, Technological, Legal, Environmental. These frameworks remind you to scan beyond your own sector and they produce a genuinely useful coverage check when you are worried you might be missing a dimension. I have used them. Many competent foresight practitioners use them.
They are also a philosophy. STEEP was formalized in American strategic management in the late twentieth century, when its assumption that the social, the political, and the technological were separable variables matched the apparent stability of the world it was built for.
Every categorization scheme is a theory of what the world is made of and how its parts relate. When you sort by litany, system, worldview, and myth, you are saying the world has depths, contingencies, and implications that surface descriptions miss. When you sort by push, pull, and weight of history, you are saying time is the primary axis and that signals belong to forces rather than domains. When you sort by actors and strategies, you are saying the world is a game and signals are moves. Choosing your categorization is choosing your philosophy, which is why I have lost so many delightful and dumb hours on this step.
None of these is the world. Each is a lens that makes some things visible and other things invisible.
The framework you use to sort your signals is one of the most consequential interpretive moves you make. The commercial tools mostly default to something comparable to STEEP, and in a polycrisis this works less well, because important action is happening at the seams and the economic, the political, and the cultural are being fused into single acts of coercion. I find myself curious about what a tool built around a different default, or built to hold several defaults at once, would make visible.
Three traditions show what choosing differently looks like. Causal Layered Analysis (CLA) asks you to read any given signal at four depths. As I understand it, there is the litany, which is the official story and the headline. The systems, which are the structures and mechanisms producing the litany. The worldview, which is the assumption about how the world works. And the myth or metaphor, which is the deeper image the culture uses to make sense of itself. CLA can be uncomfortable for many Western strategy teams because the worldview and myth layers sound like humanities work, which they are. The method insists that if you do not read those layers, you are not actually reading the signal. You are reading the press release about the signal.
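To make the four depths concrete, here is the same idea as a minimal Python record with an invented example reading; the layer descriptions follow Inayatullah, the example content is entirely hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CLAReading:
    """One signal read at all four CLA depths at once."""
    litany: str     # the headline, the official story
    systems: str    # structures and mechanisms producing the litany
    worldview: str  # assumptions about how the world works
    myth: str       # the deeper cultural image or metaphor

# A hypothetical reading of an invented signal.
reading = CLAReading(
    litany="Ministry announces AI curriculum for primary schools",
    systems="Labor-market anxiety routed through education budgets",
    worldview="Technology adoption as national survival",
    myth="The child as the vessel of the nation's future",
)
```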
The French prospective tradition organizes less by categories of content than by actors and their strategies. What is this actor doing, what are they trying to achieve, who benefits if they succeed, who loses, what does this tell us about the game being played? Prospective treats foresight as a question of who is moving and why, and it can produce very different readings than STEEP does for the same events.1
Indigenous futures framings do not share the linear-time assumption to begin with, though Indigenous futures is itself a convenient shorthand grouping rather than a single tradition, and distinct knowledge systems have distinct methods. None of them shares the forward-projection assumption that nearly every commercial foresight tool is built on. This is not decoration but a different ontology, a different theory of what kinds of things exist in the world and how they are related, and it produces signal readings that a STEEP analyst might not reach.2
Categorization is the start of the work, not the end of it. The same is true of linking, the operation by which one signal is connected to another and a pattern starts to take shape.
Your framework, perhaps STEEP, can help you sort a signal and say it is ‘environmental’. It cannot tell you, for example, that geoengineering decisions and a digital twin that will persist after its territory becomes uninhabitable are part of the same movement. That is an argument, and arguments are made by people. A foresight studio that treats categorization and linking as filing problems has offloaded the most compelling work to a function that cannot do it.
I want to look at three examples of interpretation, or what it looks like when someone reads at a layer the available tools are not reading.
The first is John Naisbitt. In 1982 he published Megatrends on the back of a plain method. He and his team, the Washington-based consulting firm Naisbitt Group, counted column inches in local American newspapers. The technique, called content analysis, was developed by theorist Harold Lasswell, who ran a wartime research unit at the Library of Congress that read societies through their press and broadcasts. Naisbitt picked it up in 1968 after leaving the Johnson administration.
His team worked through hundreds of daily papers, scanning fifteen million lines of newsprint a year, on the assumption that the local press is an honest record of what a place pays attention to and that aggregated shifts in that attention reveal what is actually moving in a society before the national conversation catches up. The book sold fourteen million copies, and on the structural calls he was largely right, with the shift to information work, the flattening of hierarchies, and the slide into a single global market all playing out roughly as he described, though the timelines were generous and a few of the smaller forecasts read as period pieces today. The fragments he gathered were signals at the point where he described them as signals, by arguing that newsprint was a record of collective attention, and that shifts in attention show up before shifts in behavior.
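The aggregation step is simple enough to sketch. A toy version in Python, with hypothetical articles and topics standing in for fifteen million lines of newsprint; the interpretive move, arguing that the counts mean anything, is not in the code:

```python
from collections import Counter

def attention_shift(articles: list[tuple[int, str]],
                    topics: list[str]) -> dict[str, float]:
    """Given (year, text) pairs from hypothetical local papers,
    return each lowercase topic's change in share of attention
    between the first and last year observed."""
    by_year: dict[int, Counter] = {}
    for year, text in articles:
        counts = by_year.setdefault(year, Counter())
        for topic in topics:
            counts[topic] += text.lower().count(topic)
    years = sorted(by_year)
    first, last = by_year[years[0]], by_year[years[-1]]

    def share(c: Counter, t: str) -> float:
        total = sum(c.values()) or 1
        return c[t] / total

    return {t: share(last, t) - share(first, t) for t in topics}
```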
This approach probably cannot work the same way going forward. Naisbitt's method rested on the press being a relatively clear record of social attention. In 2026 that record is intentionally polluted. Thousands of AI-generated content farm sites operate across languages and geographies, with estimates by NewsGuard of three to five hundred new ones emerging every month, and leading AI chatbots now repeat distorted news claims more than a third of the time.
At the same time, prominent Silicon Valley venture funds are investing heavily in producing the news rather than just delivering it. MTS, short for Monitoring the Situation, launched on X in April 2026. It is a continuous live-stream that promises to 'interpret' events in real time, though recognizing and amplifying are not the same as interpreting, and real time is itself a fiction that papers over the latency between event and reading. Objection.ai, launched the same month, is a platform that uses AI to adjudicate journalism for two thousand dollars per complaint.
The press Naisbitt was reading was an imperfect mirror of collective attention. The press an analyst will read in the future is increasingly an instrument shaping that attention from above. The signals are still there, but the source is no longer trustworthy in the way Naisbitt's method assumed, and the ambient noise is amped up. This is part of a larger pattern I will return to throughout a handful of essays: our information environment is not going to function as raw material for sense-making as we have known it.
The second example of signals read by people who were themselves inside the moment is Adam Kahane at Mont Fleur. Over three weekends in 1991 and 1992, with the apartheid government negotiating its own end, Kahane gathered twenty-two South African leaders outside Cape Town. The group included figures from the ANC, the National Party, the trade unions, and business, in most cases political opponents.
They produced four scenarios with animal names. Ostrich, where the white government refused to negotiate. Lame Duck, where a weak settlement hobbled the new government. Icarus, where a populist Black government spent beyond its means and crashed. Flight of the Flamingos, where the transition unfolded gradually and inclusively. Mandela, Mbeki, and Trevor Manuel, who later became finance minister, cited them in subsequent economic policy decisions, and Mont Fleur is widely credited with moving the ANC away from nationalization. The signals were not solely external indicators but the social possibilities the participants themselves carried. The scenarios were arguments about what those possibilities meant.
The third is Jane Jacobs on Hudson Street. She was a journalist living in Greenwich Village, and her 1961 reading of cities is one of the great interpretive stretches of the twentieth century. While the planning profession was confident, well funded, and operating on the largest scale it had ever worked at, Jacobs examined a New York city block by watching small interdependencies.
The instruments of urban planning, a profession explicitly concerned with desired future states, were not designed to detect daily cadence, or the sidewalk ballet as she called it. They were designed to plan at scales where that kind of ecosystem was almost invisible. The signals that turn out to matter are often beneath the resolution of the tools, not because the tools are bad but because the tools were built around a certain premise about what the relevant scale was. Sixty years later, the layer she was reading has a name in the foresight literature, which is the kind of confirmation interpretive work occasionally receives, late.
Consider a more contemporary failure by way of contrast. A great deal of the work that powers AI systems comes through data-labeling pipelines that route tasks to contractors around the world, often at piece rates that do not support sustained attention. The contractor is asked to label, paid by the piece, at a rate calibrated to time rather than to an interpretive thought.3
A worker tagging cultural content under those conditions might watch a Kurosawa film and tag it samurai. The tag is accurate: Seven Samurai does contain samurai. It is also a meditation on class, professional obsolescence, the dignity of defeated people, the ethical costs of violence, and the relationship between protectors and protected. The tag captures none of that, because the pipeline was not asking for a reading. The labels feed recommendation systems, categorization, and the training sets of the models that will shape what millions of people absorb next. The whole program forgets metaphor, allegory, symbolism, and subtext entirely, and runs on surface descriptions of things that exist only because they have depths.
This is a decent working definition of the gap between collection and interpretation, and of why the distinction matters. A tag is a fragment with an issue-attribution. Scale the tagging operation up and you do not eventually get interpretation. You get a larger archive of tags.
Beyond categorization, the other core operation inside a signals database is linking. Saying that signal A is related to signal B, that this filing is connected to that vocabulary, that this speech echoes that subculture's invention, and so on. Linking looks clerical and is often described in product documentation as clustering, which sounds like something the software does while you are at lunch. But it is not clerical, not at all, and I think it is among the most demanding interpretive acts inside the whole cycle.
When you link two signals, you are making an argument about what is going on in the world. You are saying these two things, which surfaced from different sources at different times in different registers, are part of the same underlying movement. That claim is contestable. It is also where a great deal of the intellectual value of foresight actually lives. A team that has developed a rich, shared, contested vocabulary for how signals relate to one another has built something nearly impossible for a competitor to replicate. Another team could scrape the same sources and capture the same fragments but they could not, without years of shared practice, produce the same linkings.
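What a kept human linking might look like as a record, sketched minimally in Python; the fields are my assumption about what makes a link contestable, not any tool's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    """A human linking kept as a contestable argument,
    not a similarity score."""
    signal_a: str
    signal_b: str
    claim: str    # the argument: why these belong to one movement
    author: str   # the situated reader making the claim
    # Later readings that disagree; disagreement is part of the record.
    contested_by: list[str] = field(default_factory=list)
```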
Linking signals by vector similarity is the dominant approach in current AI-enabled foresight and scanning tools. The mechanism is straightforward enough to describe: each signal is converted into a numerical vector, an embedding with hundreds or thousands of dimensions, and the tool measures how close two embeddings sit in that high-dimensional space, usually by cosine similarity. If they are close, the tool calls them related.
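A minimal sketch of that mechanism in Python with NumPy; the embeddings and the threshold here are invented stand-ins for whatever a given platform actually uses:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors:
    near 1.0 means the tool will call the signals 'related'."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two hypothetical 768-dimensional signal embeddings.
rng = np.random.default_rng(7)
sig_a, sig_b = rng.normal(size=768), rng.normal(size=768)
related = cosine_similarity(sig_a, sig_b) > 0.8  # threshold is arbitrary
```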
Tools that wrap large language models on top of their signal stores are doing something more complex than pure vector similarity. Most of them use retrieval-augmented generation, which means the system retrieves signals using vector similarity, then passes the retrieved signals to a language model that produces a response. The opacity problem is the same or worse, because now there are two layers of opacity: the retrieval and the generation.
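Sketched in the same spirit, with `embed` and `generate` as stand-ins for whatever models a platform calls, and `cosine_similarity` as in the previous sketch:

```python
import numpy as np

def rag_answer(query: str, store: dict[str, np.ndarray],
               embed, generate, k: int = 5) -> str:
    """Retrieval-augmented generation reduced to its two opaque layers."""
    q = embed(query)
    # Layer one, opaque: rank stored signals by embedding proximity.
    ranked = sorted(store, key=lambda s: cosine_similarity(q, store[s]),
                    reverse=True)
    context = ranked[:k]
    # Layer two, opaque: a language model writes prose over the retrieved set.
    return generate(query, context)
```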
The dimensions of the space do not correspond to anything a human reader could name. There is no axis for political register, no axis for cultural resonance, no axis for the kind of sideways or weird connection an interpreter might make. The space is meaningful to the math and blurry to the reader.
Recent interpretability research can sometimes recover meaningful structure from these spaces, but the techniques are not yet available in any commercial foresight tool. Some current tools do generate plausible-sounding explanations of why two signals were clustered together, but those explanations are produced by a separate process from the clustering itself, and there is no guarantee that the prose describes what actually drove the similarity score. The result of any of this may or may not be useful. What it cannot do is carry the interpretive weight of a human linkage, because the operation is not available for contest.
The value of a signals database, in this view, is less in its storage than in the quality of its linking vocabulary. An organization whose linking is alive has a living practice, while an organization whose linking has been outsourced to a function it cannot inspect may be holding less than it thinks it is.
Through the 1970s and 1980s, signal collection was a human and institutional activity. Marshall McLuhan had proposed, a decade earlier, that artists were a society's distant early warning system, borrowing the phrase from the radar line that ran across the Canadian Arctic during the Cold War, on the argument that artists notice changes in the perceptual environment before the rest of a culture catches up. RAND Corporation's Delphi panels polled experts in successive rounds. Clipping services, staffed by researchers, sent clients folders of news articles they had determined were worth noticing. Government departments and large companies maintained in-house libraries with subscriptions to insider publications most readers would never have heard of. Shell's scenario-planning group reached its famous conclusion about the 1973 oil shock through this kind of curated reading. The Institute for the Future was among the first organizations to treat the reading of the future as a professional discipline rather than a side activity.
By the 1990s the practice was changing under the weight of the web. Source material went from scarce to abundant, quickly. Natural language processing matured enough to support scanning at scale. Singapore opened its Scenario Planning Office in 1991 inside the Prime Minister's Office, drawing directly on Shell's methodology. Then in 2004, the same office launched the Risk Assessment and Horizon Scanning platform, RAHS, developed for use by the Singapore civil service. RAHS remains one of the most sophisticated government foresight systems anywhere in the world.
The 2010s were a period of proliferation. Shaping Tomorrow launched its AI named Athena in 2013, making it one of the earliest commercial AI-enhanced foresight platforms, though the system is still primarily a sophisticated keyword and dictionary matcher rather than a vector-based one. ITONICS was founded in 2009 in Germany, growing out of an innovation management tradition. Futures Platform was founded in Helsinki in 2016. FIBRES followed, also in Finland.
Then there is the threat and security intelligence side of the market, which is a different category but is often intertwined with foresight: Recorded Future, founded in 2009 in Massachusetts with early funding from Google Ventures and the CIA's In-Q-Tel; Dataminr; and Seerist, formed more recently through a 2022 merger. The market and consumer intelligence side, which is also adjacent rather than identical to foresight, produced Quid, Canvas8, Mintel, and CB Insights.
The 2020s brought large language models into the stack. This was a real change, and I do not want to be dismissive of it. LLMs made summarization routine. They made plausible-sounding interpretation routine as well, which is a different thing. They also made the auditability question, which had always been present, more visible. If you ask a current commercial foresight platform why it surfaced a particular signal, the answer you get is likely a summary of the signal itself or a model-generated sentence that sounds like reasoning. The platform does not easily tell you which of the thousands of other signals it considered and rejected, or why.
What has genuinely changed over fifty years is that the volume of source material has gone from scarce to effectively unlimited. Interpretation remains the scarce asset, and interpretation is still hard and the tools have not made it easier.
So we have inherited a fifty-year stack of tools, each built for the world that produced it. The world that produced them has changed, and the gap between the world we are now trying to read and the instruments we are trying to read it with is the substance of what follows.
During a global re-ordering, at least one more observation applies: the useful horizons compress. A scenario exercise that would have made sense at a fifteen-year horizon in 2015 may need to be redone at a three-to-five-year horizon in 2026, because the underlying assumptions are shifting faster than they used to. I don't think this means foresight has become short-term, but that the scanning, scenario, and interpretation cycles have to turn over more frequently, and the database has to support being revisited and re-read in light of assumptions that were valid last year and are not valid this year. I can see how libraries of trends will become less useful in this environment, and living practices might become more useful.
Signals, as I see it, are most useful in the early divergent phases of a foresight project. It's good to be specific about this because some of the reputational difficulty foresight can have inside organizations comes from signals being deployed, for lack of a better term, where they were never going to help. They are useful for keeping a scenario exercise honest, for challenging assumptions, for populating the edges of the possibility space. A strategy team that does not look at signals before it starts drafting will write scenarios that extend the present, while a team that reads signals carefully will write scenarios that can be surprising.
Signals are much less useful in the convergent phases of a project, where you are concerned with prioritization, the strategic decision, the roadmap. By the time a group is choosing what to do, signals have probably done their job. I haven't seen a good argument for scanning past the point where more signal material would help. Scanning past that point sounds like a way of postponing an uncomfortable decision. It is the decision-maker's difficult job to decide while the picture is still cloudy; it is not exactly the signal's job to clarify a decision.
Signals can be close to useless in quarterly planning and in most forms of short-horizon competitive intelligence. Real-time intelligence platforms exist to handle short horizons well. Foresight platforms answer a different demand, so when they are pressed into tactical service they might disappoint someone who expected them to 'answer' a different question.
The signals collection methods worth describing are the ones that surface things genuinely outside the current frame, rather than rearranging the present into variations of what is already assumed.
What follows is a scan of those methods. Some are operational methods that produce specific kinds of signal, others are interpretive frames that change how any signal is read.
Experiential futures inverts the signal question entirely. Instead of reading signals the world has produced, you produce signals yourself, as probes, and read the reactions. Stuart Candy and Jake Dunagan have made this a serious method, with projects placing artifacts from imagined futures into ordinary public space and watching what people did with them. It is almost entirely absent from the commercial scanning tools, because it does not fit the scanning paradigm at all.
Censorship reading is one of the highest-value signal classes also absent from the marketplace's scanning options. What a state removes from public discourse tells you much about the movement of the state, which is not always what we assume. Projects like Weiboscope, GreatFire, and China Digital Times' archive of leaked propaganda directives have been documenting what the Cyberspace Administration of China takes down for over a decade, and the pattern of those takedowns has predictive value that the takedown event itself does not. In the months before the 2021 tech crackdown, the censorship signature shifted in ways visible to researchers paying close attention to Chinese-language traffic and largely invisible to the foresight platforms reading English-language commentary about Chinese policy.
This applies well beyond China. If you are reading English-language coverage of a non-English environment, you are reading the second derivative of what is happening, processed through translation, opinion, and time. The signal lives in the original-language traffic, including the parts that have been deleted, and reading it requires a person who can read it.
Close reading of bureaucratic language is in the same family. Kremlinology, the Cold War practice of reading Soviet intentions from the placement of names in Pravda and the order of officials at state ceremonies, was thought to be obsolete after 1991 and has come back, particularly for China analysis but increasingly for Russia, North Korea, and Iran. Reading Xi's speeches in the original Mandarin produces different readings than the Xinhua English summary does, and much of what later surprised Western analysts was legible months or years earlier, if you were doing the textual work.4
The same discipline applies to American executive orders, where the press release and the operative text often diverge in ways that reward the same kind of reading. Reading bureaucratic prose carefully is not a foresight method in the canonical sense, but in a polycrisis it is an available signal source, and almost none of it is automatable in the way the platforms would need it to be.
A related approach begins with refusing the assumption that technology is one global thing that different countries adopt at different speeds. The Hong Kong philosopher Yuk Hui, who works in the tradition of philosophers who treat technology as inseparable from the culture that builds it, has spent the last decade arguing that every technology carries the cosmological assumptions of the culture that produced it, meaning the culture's underlying picture of how the world fits together and what humans are for. He calls this cosmotechnics, and the upshot is that there is no single modernity arriving everywhere on a delay. There are multiple modernities, each shaped by a different inheritance.
The foresight consequence is concrete. If you assume Chinese AI development is American AI development with a delay, you misread the DeepSeek release of January 2025, which was visible in the Chinese open-source community for months before it startled the American press. If you assume Indian software is offshore American software, you miss what India started building in 2009. For thirty years the Western mental model of Indian software was outsourcing, the back-office work of American companies done more cheaply in Bangalore and Hyderabad. In 2009, the Indian government and a group of technologists began building something that does not fit that model at all. The India Stack is a combined national digital identity system called Aadhaar, a public payments rail called UPI, and a consent-based layer for sharing verified documents, built as a public utility. The live signal in 2026 is the export: India has been actively offering versions of the stack to other countries, now being piloted across Africa, Southeast Asia, and Latin America.
There is a class of signals I think of as leaving signals, meaning what people who have recently left a place say about where they came from. This has a long history and almost no presence in commercial foresight. Hong Kong residents and tech workers who relocated to Singapore, Tokyo, and Dubai between 2020 and 2023 have articulated things about the regulatory environment they left and what has changed there. Chinese tech workers who left Silicon Valley to return to China during 2024 and 2025 have done the same about the conditions they left in the US. The émigré press is one of the few places where a polarized information environment partly self-corrects, because people who have moved have less reason to perform loyalty to a narrative they have left behind. Reading this kind of signal requires what every method on this list requires, plus a feel for the ritual life, the social memory, and the contextual conditions of a place.
Feminist and womanist traditions of reading have been doing signal work for forty years, and most of it has not crossed into commercial foresight, which is its loss.
Patricia Hill Collins's idea of the outsider-within, from Black Feminist Thought in 1990, is itself a theory of signal reading. The argument is that people inside an institution but not of it, in her case Black women in white-dominated professional settings, see things that insiders cannot see and outsiders cannot access, and that the standpoint produces a form of knowledge unavailable from any other position.
Donna Haraway's situated knowledges argument from 1988 makes the case broadly that the pretense of reading from nowhere, what she calls the god trick, produces worse readings than reading from somewhere clearly named. The signals collection platforms of today seem to perform the god trick by design. The contradiction is most visible in something like Monitoring the Situation, which launched on a platform marketed as 'the everything app', as if a tool hosted on a self-described totality could plausibly read the situation from outside it. A practice that takes situated reading seriously starts from the position of the reader, names it, and treats that naming as part of the method rather than as a contamination to be controlled for.
Vernacular signals scanning in original languages is related to censorship reading but distinct. The vocabulary a culture invents to describe itself changes quickly when that culture is under pressure, and words change specifically to evade pressure. The slang younger urban Chinese use on Weibo and WeChat went from involution (neijuan 内卷) in 2020 to lying flat (tang ping 躺平) in 2021 to let it rot (bai lan 摆烂) in 2022. Involution described the experience of working harder and harder for diminishing returns. Lying flat described the response of opting out of the competition entirely. Let it rot described a darker resignation, doing the minimum and watching the system fail. These are three names for the same societal predicament, a generation describing its diminishing prospects in language that keeps shifting because each new word gets noticed and constrained or diluted, and a new one has to be invented.
These are legible signals about how a society is metabolizing its own situation. The words arrive in Western press coverage months later, by which point any signal value has decayed, the population has invented the next term, and a foresight practice depending on translated coverage is reading conditions already in mass motion, in hindsight.
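The decay argument implies a simple test any original-language scanner could run: flag terms whose frequency is accelerating before translated coverage catches up. A toy sketch in Python, with invented monthly counts:

```python
def emerging(counts: dict[str, list[int]], factor: float = 3.0) -> list[str]:
    """Flag terms whose latest monthly count exceeds `factor` times
    their average over prior months. Counts are hypothetical."""
    flagged = []
    for term, series in counts.items():
        if len(series) < 2:
            continue
        baseline = sum(series[:-1]) / (len(series) - 1)
        if series[-1] > factor * max(baseline, 1):
            flagged.append(term)
    return flagged

emerging({"躺平": [2, 3, 4, 40], "内卷": [50, 48, 51, 49]})  # -> ["躺平"]
```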
Capital projects, construction, supply-chain rerouting, and migration are physically observable signals of change. I worked for decades in logistics and global flows, watching how and why and where things, people, information, and money move, and what their movements reveal. This is still where I spend a lot of my attention.
For most of those decades, this kind of reading was the most concrete method available. Ships reported their positions and construction showed up in satellite imagery and refugee needs illuminated absolute, grim truths. The physical reality was harder to falsify than any rhetoric on top of it, which is why, for me, it granted an immediate advantage in clarity: unbendable signals.
That has changed. Tankers carrying sanctioned cargo now routinely spoof their positions, swap identities, and turn off transponders. In some cases the records show vessels apparently sailing across deserts or hilariously stacked in landlocked airports. Decoys, masking, timing spoofing, telemetry manipulation, sensor disruption, synthetic voice signals and much more echo the noise saturating the news environment. The practitioner working in this space who aims to do sense-making will need a discernment for deception and an appetite for absurdity that the practitioner of twenty years ago did not need to develop. The same shift is visible across the information environment.
What is common to everything listed above is that the human reader is irreducibly the instrument, not because automation is impossible but because the work is interpretation, and interpretation is what people working inside traditions of media, text, and cultural literacy do. The platforms can fetch and compare and surface. They cannot read, in the sense this essay has been trying to define. The methods worth investing in are the ones that put the reader at the center of critical selection and reasoning, and build outward from there.
This is where the philosophical argument narrows to become a claim about practice. The kind of consulting I want to describe here and in my following essay is deliberate, culturally situated, and committed to interpretive gravity over coverage.
At first glance most commercial foresight scanning systems are sold as a volume service. Something like, we will scan more for you than anyone else can. We will surface more signals and we will process more sources. The assumption is that the limit of good foresight is coverage, and that if you just cover enough of the world, the important patterns will emerge.
For the kind of practice I’m interested in, signals function not solely as raw material to be processed at scale, but as prompts for shared reading. The consultant or collaborator brings perception, a set of interpretive traditions, a deep familiarity with the cultural register of the client's domain, and a working relationship that lets the understanding of the inquiry question itself develop alongside the reading. The idea is to surface collective intelligence.
The signals database in this mode is less like a data warehouse trying to be comprehensive, and more like a common and kept record of what you have read together, what you have called important, and what you have learned to read differently over time. It accumulates interpretive wealth. The distinction is roughly the one researchers in qualitative methodology have made between big data and deep data, between knowing what is happening at scale and knowing what it means at depth.
Broadly generalizing, the scaled tools were built for a world where culture was an elastic layer sitting on top of coherent economic and political systems. The tools available now are largely retrofitted for the new conditions, even as they market themselves as current.
Part two of this series, The Nerve Center, develops the practice and describes why it has to start now. Part three, Legible Machines, describes a tool that might actually support it.
2026