The Nerve Center
Image: Yukimasa Okumura. Design for Haruomi Hosono’s album S-F-X. 1984
Part Two
Part One, Reading The World, ended with a claim I cannot exactly prove, which is that interpretation is where foresight work “lives”. This essay is what happened when I tried to find a tool for collecting and organizing signals of change that believed the same thing.
I describe four things in this note: my view of what signals work is; how the practice is changing under current conditions; what the market currently offers, organized by what the tools actually believe about interpretation rather than by when they were founded; and one idea from machine learning that I am thinking about, because it might be the shape of a different kind of tool.
Ansoff's original weak signal, the incomplete piece of early information that hinted at a discontinuity, comes from a world that was in some sense “stable”, or legible enough to have a normal course of events for a signal to deviate from. The polycrisis has caricatured and distorted that legibility. The US-Israel-Iran war has swiftly reshaped the way people write about energy, supply chains, trade, theology, peace, and security. Currency, technology, resources and human lives have been cast as leverage. In that twisting of context, much noise can look like a signal, and what a signal is not becomes blurry.
A direction of movement extrapolated from the recent past is a trend, not a signal. Electric vehicle adoption is a trend. Remote work is a trend. Most of what can get confused with foresight is forecasting, which is the business of estimating how far and how fast a known direction will continue. A prediction market is something different again. It is gambling: a crowd bets on a named outcome, and the price aggregates what the crowd is willing to lose. Both are what they are, inside their own limits. A signal is a fragment of the present that does not yet belong to a named trend, because the question it belongs to has not precisely been asked.
There is a harder problem underneath this one. A trend is something you discover by watching the world. But in 2026 it is increasingly something you can manufacture if you are large enough.
Bill Gates is one visible example of a larger pattern. He is the single largest private owner of American farmland, with about 275,000 acres across 19 states, a fraction of one percent of the roughly 900 million acres of U.S. farmland in total and an outsize presence in public attention. The same handful of actors and platforms who own significant farmland also own, or influence, the seed and agrochemistry (Bayer-Monsanto), the equipment and telemetry (John Deere's data layer), the logistics and retail (Amazon, Walmart), and the delivery apps (DoorDash, Uber Eats). When the seed, the soil data, the grocery shelf, and the delivery are controlled by the same small circle of companies, the direction of movement in food is wherever the owners of the pipeline have decided to steer.
The forecaster who extrapolates this trend is reading the steering as if it were weather. The person or agent collecting signals is asking who is steering, what they are steering toward, and what fragments in the present might indicate a shift the owners have not yet announced. This last point is one of the reasons the interpretive work cannot neatly be automated. The fragments are where the unadvertised strategies show.
The strongest foresight teams, as far as I can tell, have been ahead of both problems. The outline of their response is worth describing.
Japan's National Institute of Science and Technology Policy runs its foresight work in three interleaved registers. The Delphi survey gathers expert consensus, and the horizon scan captures weak signals. The vision work asks the normative question about what kind of society is worth building. The eleventh exercise, published in 2020, did all three and published them separately so that each could be read against the others. This structure refuses to collapse the three questions into a single output: put them side by side and the gaps between them could themselves be signals.
Policy Horizons Canada has taken a different angle. Rather than building a platform, they have tried to describe foresight as a set of teachable competencies, ten of them, ranging from information collection and analysis to synthesis, sensemaking, storytelling, facilitation, systems thinking, and futures thinking. The underlying bet is that foresight is a craft with nameable skills rather than an innate sensibility, and that diffusing those skills across the senior civil service matters more than concentrating them in a single office. Singapore has made a parallel play for decades, rotating officials through the Centre for Strategic Futures before posting them elsewhere. Both countries have decided that foresight lives in people, not in platforms.
Latin America is embedding foresight inside parliaments rather than inside executive planning offices. The First Regional Conference of Parliamentary Committees of the Future met at ECLAC in Santiago in June 2024, with representatives from Argentina, Brazil, Chile, Colombia, Costa Rica, the Dominican Republic, Mexico, Paraguay, and Uruguay. At least one underlying theory is that reading the future is part of deliberation rather than administration, which is a distinct claim about what foresight is for. I am curious how that difference plays out over a decade.
The South African Mont Fleur scenarios from 1992, facilitated by Adam Kahane, are often cited as one of the most influential foresight exercises ever conducted, because they shaped how the post-apartheid transition was imagined. The method was participatory scenario work rather than signal scanning alone. Since then, civil society organizations across South Africa, Kenya, Tanzania, and Nigeria have run public interest foresight exercises through the Institute of Economic Affairs, the Society for International Development, the African Leadership Institute, LEAP Africa, and Twaweza. The standing institutional work happens at the Institute for Security Studies in Pretoria, whose African Futures programme runs a continuous forecasting platform. The pattern across the continent is that foresight capacity lives in civil society, universities, and think tanks rather than inside central government, and the work tends to be participatory rather than technocratic. If I compare this model to Singapore's, I suspect it reflects something about where institutional trust sits in different parts of the world.
The academic methods have been moving in the same direction as the institutions. Inayatullah's Causal Layered Analysis asks the reader to hold four depths at once, litany, systems, worldview, and myth, which is a way of acknowledging that a signal does not have a single meaning. Three Horizons, developed by Bill Sharpe, treats signals as artifacts of the transition between a declining present system and an emerging future one. Narrative foresight, in the work of Cheryl Doig and others, reads the architecture of stories rather than their frequency or sentiment, which is how the slowest-moving and most consequential shifts tend to register. The French prospective tradition, going back to Berger and Godet, organizes the analysis around actors and their strategies, which is closer to how a diplomat or an intelligence analyst reads the world.
I see two threads running through all of this. The first is a retreat from the idea that a signal has a single meaning. The second is the relocation of interpretation back into the human practitioner, working inside institutions that treat reading as a craft. Both threads run against the assumptions most commercial tools were built on.
The commercial foresight market looks less confusing when you organize it by what each tool thinks a signal is and who it thinks should interpret it. Four rough clusters, plus the national systems that are not for sale.
The first cluster believes the machine interprets, at short horizons. Recorded Future, Dataminr, Seerist Cassandra, Meltwater. These are threat and intelligence tools. They scan news, social media, satellite imagery, and open-source intelligence at high speed and surface what just happened or is about to happen. The machine does the reading, the human reacts. The horizon is hours to weeks. Recorded Future aggregates over a million sources at enterprise prices that usually start around sixty thousand dollars a year. Dataminr has an exclusive integration with X. Seerist Cassandra, from about fifty thousand dollars a year, fuses multiple data types for geopolitical risk teams. These tools do their job well. Their job is not quite foresight.
The second cluster believes in-house analysts interpret, at medium horizons. Futures Platform, Shaping Tomorrow, FIBRES, ITONICS. The tool provides content or scanning, but the interpretation is written either by the vendor's own futurists or by the customer's foresight team. Futures Platform publishes at 14,900 euros a year and is built on a library of roughly fifteen hundred trend analyses by in-house futurists. Shaping Tomorrow runs an AI called Athena that scans around eighty thousand sources. FIBRES starts at 960 euros a year and offers a free tier for educators. ITONICS is the most complete innovation operating system in this group. These four share a methodological seriousness you do not find elsewhere, and all four were built for a world that is now visibly reconfiguring.
The third cluster believes the customer interprets, at whatever horizon they need. Quid, CB Insights, Mintel, Canvas8, TrendWatching, Trend Hunter. These produce data, reports, or semantic maps, and leave the meaning to the buyer. Quid runs semantic topic modelling over patents and news and is best known for its network graph visualizations. CB Insights is the definitive tracker of startups and venture rounds. Mintel is the consumer packaged goods authority. Canvas8 reads cultural signals through a behavioural science lens. Each is useful inside its niche. None is methodologically foresight-native. They are market research and consumer insight, repositioned.
The fourth cluster believes the community interprets, at long horizons. The Institute for the Future's Future Factors, offered to its Vantage partners, pairs a signals database with a moderated community of practitioners. The practice is explicitly social. A database of signals read collectively, argued over, revised. This is the premise I found most honest anywhere in my scan, because it treats interpretation as something that happens between people rather than inside a machine.
The national systems operate on a fifth premise, which is that the state interprets, across multiple horizons, in service of sovereignty. Singapore's RAHS and Centre for Strategic Futures. China's Five-Year Plan apparatus, now in its fifteenth iteration covering 2026 through 2030. Japan's NISTEP, described above. Russia's iFORA at the Higher School of Economics. Canada's Policy Horizons. These are not products. They are political theories of where foresight should sit.
A few observations fall out of this grouping. The first is that buyers typically pay for three or four tools from different clusters to approximate one workflow, because no single cluster covers the practice. The second is that the gap I went looking for, a tool that believes the skilled human reader interprets, at any horizon, across text and image and object, with interpretive vocabulary accumulated across years, does not fit inside any of the four commercial clusters. The closest is IFTF's community model, which comes nearest because it at least locates interpretation in a human practice rather than in an algorithm. The third is that the national systems are more sophisticated than anything on sale, and the fact that they are unavailable to anyone outside their national contexts is itself worth noticing. Sovereign foresight capability is becoming a category of state power.
What AI is currently doing, and what Mythos implies
Large language models have changed what signal scanning feels like in practice. Summarization is now almost free. Translation is good enough that a Chinese policy document can be read in English in seconds, with the usual caveats about idiom and register. Clustering by vector similarity is efficient, which means large bodies of text can be sorted into topic groupings without manual tagging. These are real improvements and I do not want to minimize them.
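To make "clustering by vector similarity" concrete, here is a minimal sketch with toy two-dimensional vectors standing in for real embeddings. The cosine measure is standard; the threshold value and the greedy single-pass grouping are illustrative choices of mine, not how any particular vendor's tool works.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def cluster(vectors, threshold=0.9):
    """Greedy single-pass clustering: each vector joins the first cluster
    whose founding member it sufficiently resembles, else starts its own."""
    clusters = []  # each cluster is a list of indices into `vectors`
    for i, v in enumerate(vectors):
        for c in clusters:
            if cosine(vectors[c[0]], v) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Toy embeddings: vectors 0 and 1 point the same way, vector 2 is orthogonal.
vecs = [(1.0, 0.1), (0.9, 0.12), (0.0, 1.0)]
print(cluster(vecs))  # → [[0, 1], [2]]
```

The point the sketch makes is the one in the paragraph above: the grouping is mechanical and cheap, but nothing in it says *why* two signals belong together, only that their coordinates are close.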
What they are not is interpretation. A language model that produces a summary of a signal can write a sentence that sounds like reasoning without doing the reasoning. Vector similarity says two signals are close in a high-dimensional space the user cannot inspect, which means the machine has decided two things resemble each other for reasons it cannot show you. The output of both looks like reading but is not. A practitioner using these tools has to remember, constantly, that the machine is generating plausible-sounding interpretation, which is a different category of thing from actual interpretation and harder to catch than the old failures were. The old tools failed obviously. The new tools fail convincingly.
In April 2026, Anthropic announced Claude Mythos, a frontier model capable of discovering and exploiting software vulnerabilities at a scale and speed no human team can match. Mythos can take a CVE identifier and a commit hash and produce a working exploit within hours. It can chain multiple vulnerabilities together, reverse-engineer closed-source binaries, and, according to the accompanying red team write-up, autonomously find thousands of zero-day vulnerabilities across major operating systems and browsers. The company has launched Project Glasswing to get these capabilities into defenders' hands before they reach attackers. The timeline Anthropic gives for comparable capabilities reaching open-weights models, where anyone can run them locally, is twelve to eighteen months.
Mythos is specifically a cybersecurity development, and it is the one we happen to know about. Systems with similar capabilities aimed at other domains are either already here, unannounced, or coming. The reason Mythos or anything like it matters for signals work is that it changes the signal-to-noise ratio in every domain that deals with documents, code, or text at scale. When a machine can read an entire codebase at machine speed and find every exploitable pattern, the equivalent applied to policy documents, financial filings, academic papers, or public communications produces either a flood of spurious signals or a small number of very good ones, depending entirely on the quality of the reasoning layer on top. Most current AI tools do not have a reasoning layer you can audit. They have a pattern-matching layer and a generation layer, with nothing in between that a human can argue with.
This is where I want to spend the rest of the essay, because the question that matters is not whether AI will reshape signals work. It already has. The question is what kind of AI serves reading rather than replacing it.
Tsetlin Machines, and what makes them different
The Tsetlin Machine is a machine-learning method developed at the University of Agder in Norway by Ole-Christoffer Granmo, starting around 2018. It is not a neural network. It learns in a fundamentally different way.
A neural network learns by adjusting millions or billions of numerical weights inside layers of interconnected nodes. When the network classifies something, the reason lives in the weights, which are not human-readable. You can inspect a weight matrix and see numbers, but the numbers do not form an argument. An LLM generating an explanation for its output is producing a plausible-sounding sentence, not a transcript of its actual reasoning. This is the opacity problem that has dogged machine learning for a decade.
A Tsetlin Machine learns propositional logic clauses. Human-readable AND, OR, and NOT rules, built from discrete input features. After training, the output is a set of clauses you can read with your eyes. Something like: IF feature A is present AND feature B is absent AND feature C is present, THEN classify as pattern X. The clauses are not an interpretation of the model's reasoning. They are the reasoning. You can inspect them, argue with them, edit them, and retrain the model with your edits.
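A sketch of what such a clause looks like as data, and what it means for it to "fire". The feature names and the example clause are invented for illustration; a real Tsetlin Machine learns many such clauses through teams of Tsetlin automata, which this sketch does not implement.

```python
def clause_fires(clause, features):
    """A clause is a conjunction of literals: each literal requires a named
    feature to be present (True) or absent (False). The clause fires only
    if every literal is satisfied."""
    return all(features.get(name, False) == required
               for name, required in clause.items())

# IF A is present AND B is absent AND C is present, THEN pattern X.
clause_x = {"A": True, "B": False, "C": True}

print(clause_fires(clause_x, {"A": True, "C": True}))             # → True
print(clause_fires(clause_x, {"A": True, "B": True, "C": True}))  # → False
```

The dictionary *is* the rule: a practitioner can read it, delete a literal, or add one, and retrain, which is exactly the editability the paragraph above describes.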
This matters for signals work because it matches the shape of the interpretive problem. When a current AI-enabled foresight tool flags a signal as important, nobody can audit why. The system has pattern-matched against an embedding that is not human-readable, and the output is a score the team has to take on faith or a sentence the LLM has generated after the fact. A Tsetlin Machine could, in principle, let the system say something like: this signal was flagged because it combines an organic metaphorical register, non-OECD geographical origin, and a sacred-language incongruity, which is the same pattern we saw before the last shift in this sector. That is a sentence a human reader can engage with. The human can push back, edit the rule, or ask the machine to find other signals matching a slightly different clause.
What draws me to the approach is the match between the Tsetlin Machine's need for discrete meaningful input features and the vocabulary of a serious interpretive practice. The input features for a Tsetlin Machine have to be things, individual binary or categorical properties of the data. In a signals context, those features could be metaphor, symbol, allegory, narrative archetype, register incongruity, simile drift, geographic origin, discourse layer, actor type, temporal framing. These are not the vocabulary of bag-of-words text mining. They are the vocabulary of someone who has been trained to read culture. A person with that training could design a feature ontology that a machine-learning practitioner might never think to construct, and the machine, trained on those features, would produce rules in the same vocabulary. The human and the machine would be speaking the same language.
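A sketch of how such an interpretive ontology could be binarized for a Tsetlin-style learner. Every feature name here is hypothetical, invented for illustration; the design work the paragraph describes is precisely deciding what belongs in this list.

```python
# Hypothetical interpretive feature ontology, fixed in order so that
# human reader and machine point at the same positions.
ONTOLOGY = [
    "metaphor_organic",
    "register_sacred_incongruity",
    "origin_non_oecd",
    "actor_state",
    "temporal_framing_long",
]

def encode(signal_tags):
    """Turn a human reader's tags for one signal into a binary vector
    in the ontology's fixed order."""
    tags = set(signal_tags)
    return [1 if feature in tags else 0 for feature in ONTOLOGY]

vec = encode(["metaphor_organic", "origin_non_oecd"])
print(vec)  # → [1, 0, 1, 0, 0]
```

The vector is trivial; the ontology is not. The claim in the text is that a trained reader's feature list would differ, usefully, from anything a bag-of-words pipeline would produce.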
I have not trained one. The literature is still young, the tooling is less mature than PyTorch, and performance on messy natural language is not yet the best in class. The approach would almost certainly need to sit inside a hybrid architecture, with language models doing initial feature extraction from raw text and Tsetlin Machines doing the interpretable reasoning layer on top. I am interested in pushback from anyone who has actually built one. What I am less interested in is the version of the argument that says the opacity of current AI is fine because the outputs are good enough. In a world where Mythos-class capabilities are arriving across multiple domains, opacity is not a tolerable property of a tool that claims to help you read.
The broader point, the one the Tsetlin Machine only partly illustrates, is that the next generation of signals tools will have to be legible. Not because legibility is a humanist preference. Because the alternative is a practice in which the human is a reviewer of machine-generated interpretations they cannot audit, and that is not foresight. That is automation wearing foresight's clothes.
The future reality of doing this work
I have been rebuilding a signals database for about a year. I am worse at it than I was five years ago, in the sense that I have fewer certainties, and I am better at it than I was five years ago, in the sense that I can feel when a signal is doing more than one thing. Most of the time now I am reading a fragment and asking what it is saying underneath what it is saying, and I am using the machine for the things the machine is good at, which is finding more fragments and translating them, and not using the machine for the things it cannot do, which is telling me what the fragment means.
The future reality of this work, as far as I can see it, has a few features. The tools will get better at reading, but they will get better at the layers of reading the humans are least excited about. Volume scanning. Translation. Clustering by surface features. Pattern matching against known patterns. What they will not get better at, at least not through the current architectures, is novel interpretation. A signal that matters is usually a signal that does not match a previous pattern, and pattern-matching machines struggle with precisely those.
The humans who continue to do this work will do so in smaller and more skilled teams, with longer memories, accumulating interpretive vocabulary over years. They will work in closer contact with their material, reading buildings and advertisements and product forms alongside documents, because the text-only paradigm cannot hold what is now happening visually. They will treat their interpretive frameworks as living traditions rather than as proprietary methodologies, borrowing freely from Causal Layered Analysis, narrative foresight, French prospective, indigenous futures, and whatever else the moment calls for. They will use machines for leverage rather than for authority.
The institutions that continue to support this work will look less like software platforms and more like commonplace books, workshops, long-running study groups. The national systems already understand this. The commercial market is catching up slowly, if at all. The gap is where an honest consultancy could grow for a long time, because the demand for serious reading is rising faster than the supply.
What I suspect about the next decade is that reading the world well will become a sovereignty capability in a sense the last generation of foresight language did not quite capture. Not reading as in data collection, which states and corporations have always done. Reading as in the work of deciding what a fragment of the world means, when the world is using fragments as weapons. That capability will live in skilled people, supported by legible tools, embedded in institutions that treat interpretation as craft. Some countries are already building it. Some organizations already have it. Most do not know they will need it.
I do not know if the rebuild I am doing will be finished in a year or in five. I know the tool I want does not exist yet. I know the practice I am describing does exist, in pieces, across the teams and institutions I have been writing about. What I am trying to do, in my small way, is collect those pieces into something a person could use. If you are working on the same problem, from whatever angle, I would be glad to hear from you. The reading is the work, and the work is easier when we do it alongside each other.
References
tk