Tacavar
2026-04-13

Two Free Macro Signals We’d Track Before Paying for Another Trading Dashboard

Some of the best crypto signals on earth are still free.

That is not a contrarian slogan. It is a practical operating principle Tacavar uses when deciding what belongs in a research stack and what belongs in a sales deck. If a founder, PM, or discretionary trader reaches for another paid dashboard before exhausting the highest-signal public data, they are usually buying presentation, not edge.

At Tacavar, the better question is narrower: which public inputs consistently improve decision quality per unit of engineering time? Two keep clearing that bar. One captures attention before narrative hardens. The other captures liquidity before markets fully price it. Together, they cover an unusually large share of what matters for crypto macro signals and bitcoin leading indicators.

Why paid sentiment dashboards are often the wrong starting point

Most paid sentiment products solve the wrong problem first. They aggregate feeds, label mood, and render the result in a polished UI. That can be useful later. It is rarely the best starting point for a founder/operator trying to build an actual edge.

Tacavar has found that paid dashboards often fail in three ways. First, they abstract away source mechanics, which means the user cannot judge whether a signal is early, stale, or merely correlated. Second, they bundle commodity inputs behind expensive packaging. Third, they train teams to consume outputs rather than engineer hypotheses.

For Tacavar, that matters because signal work is not a screenshot business. It is a pipeline business. If the underlying data can be pulled directly, normalized cleanly, and joined with price and market structure data inside one system, paying a vendor too early creates dependency without improving the signal itself.

That is especially true in crypto. Many of the trading signals sold into the market are just reformatted public series with better branding. A Tacavar-style operator should ask: does this dashboard give us a proprietary dataset, a superior transformation, or faster access than we can build ourselves? If the answer is no, the dashboard is probably a convenience purchase, not a research advantage.

The mistake is not buying tools. Tacavar buys tools where the bottleneck is real. The mistake is buying tools before proving that the raw public inputs are insufficient. In practice, that proof bar is much higher than most teams assume.

Wikipedia pageviews as a leading curiosity signal

Tacavar’s view is simple: before capital moves at scale, curiosity often moves first. One of the cleanest public proxies for that curiosity is Wikipedia pageviews.

The Wikimedia Pageviews REST API is free, effectively frictionless, and structurally different from most retail-facing sentiment feeds. Instead of measuring what traders say, it measures what large populations decide to look up. That distinction matters. Search and lookup behavior often begins upstream of public narrative consensus.

In Tacavar’s signal research, a focused article set matters more than broad coverage. The useful approach is not “track all of Wikipedia.” It is “track the right pages with domain intent.” Tacavar built a curated ingestion set around articles such as Bitcoin, Ethereum, Jerome Powell, Donald Trump, ChatGPT, and Anthropic, then routed the resulting daily series into a unified signals table. That design keeps the signal interpretable. When Bitcoin pageviews spike, Tacavar knows what changed. When Jerome Powell or Treasury-adjacent topics spike, Tacavar can tie that curiosity wave back into macro framing.
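As an illustration of that curated ingestion, here is a minimal sketch against the Wikimedia Pageviews REST API's documented per-article daily route. The helper names (`pageviews_url`, `to_daily_series`) are hypothetical, not Tacavar's actual code:

```python
from urllib.parse import quote

# Wikimedia Pageviews REST API, per-article daily counts.
BASE = "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article"

def pageviews_url(article: str, start: str, end: str,
                  project: str = "en.wikipedia") -> str:
    """Build a per-article daily pageviews URL.

    start/end use YYYYMMDD; article titles use underscores for spaces.
    'user' excludes spider/bot traffic, 'all-access' merges platforms.
    """
    title = quote(article.replace(" ", "_"), safe="")
    return f"{BASE}/{project}/all-access/user/{title}/daily/{start}/{end}"

def to_daily_series(payload: dict) -> dict:
    """Flatten an API JSON response into {YYYY-MM-DD: views}."""
    out = {}
    for item in payload.get("items", []):
        ts = item["timestamp"]          # e.g. "2026041300"
        out[f"{ts[:4]}-{ts[4:6]}-{ts[6:8]}"] = item["views"]
    return out

# A curated article set keeps the signal interpretable.
ARTICLES = ["Bitcoin", "Ethereum", "Jerome Powell"]
urls = [pageviews_url(a, "20260401", "20260412") for a in ARTICLES]
```

Fetching each URL (for example with `urllib.request`) and passing the JSON through `to_daily_series` yields one daily series per article, ready to land in a unified signals table.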

The reason Wikipedia pageviews work as a trading input is lead time. Attention often shows up there hours before TV segments, social amplification, or fully formed market narratives. By the time a paid sentiment dashboard tells users that a topic is “trending,” the curiosity impulse has usually already occurred. Tacavar would rather capture the first derivative of attention than the polished summary of yesterday’s story.

This is also one of the rare signals that retains usefulness precisely because it looks too simple. Most market participants are trained to ignore data that does not arrive in institutional packaging. Tacavar treats that as an advantage. If billions of human curiosity decisions are visible through a free API and almost nobody operationalizes them, that is not amateurish data. It is under-harvested data.

For founder/operators, the lesson is not “Wikipedia predicts price directly.” Tacavar would not make that claim. The lesson is that curiosity spikes can identify narrative regime change early enough to improve positioning, reduce surprise, and prioritize what deserves deeper investigation.

FRED net liquidity as the highest alpha-per-effort macro input

If Wikipedia pageviews capture curiosity, FRED net liquidity captures the macro backdrop that decides whether risk can actually travel.

Tacavar rates FRED net liquidity as the highest alpha-per-effort public macro input in the crypto stack. The reason is not mystical. It is because the signal is grounded in variables that directly shape financial conditions, and all of the required components are available through the Federal Reserve’s own public infrastructure.

The working Tacavar formula is straightforward: net liquidity = WALCL - RRPONTSYD*1000 - WTREGEN*1000, with every term expressed in millions of dollars.

That rolls together the Fed balance sheet (WALCL, reported in millions), reverse repo usage (RRPONTSYD, reported in billions, hence the scaling), and the Treasury General Account (WTREGEN, also reported in billions) into a single series that approximates net liquidity available to risk assets. The Fed funds rate (DFF) is tracked alongside as policy-rate context; as a percentage, it cannot be added to dollar levels. Macro funds routinely sell dressed-up versions of this framework. Tacavar’s position is that the packaging is not the edge. The disciplined ingestion, normalization, and interpretation are the edge.
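To make the unit handling concrete, here is a hypothetical helper (names assumed, readings illustrative) for the dollar-denominated core of the series, under the standard FRED conventions that WALCL arrives in millions while RRPONTSYD and WTREGEN arrive in billions:

```python
def net_liquidity_musd(walcl_musd: float,
                       rrpontsyd_busd: float,
                       wtregen_busd: float) -> float:
    """Net liquidity in millions of USD.

    FRED reports WALCL in millions but RRPONTSYD and WTREGEN in
    billions, so the latter two are scaled by 1000 before subtracting.
    """
    return walcl_musd - rrpontsyd_busd * 1000 - wtregen_busd * 1000

# Illustrative (not real) readings: $7.2T balance sheet,
# $400B reverse repo, $750B Treasury General Account.
nl = net_liquidity_musd(7_200_000, 400, 750)  # 6,050,000, i.e. about $6.05T
```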

Why does this matter for crypto macro signals? Because Bitcoin and other high-beta digital assets are not trading in a vacuum. They are downstream of liquidity conditions. When liquidity improves, crypto often responds with a lag that is actionable. When liquidity tightens, the market can remain superficially strong for a period before the constraint becomes obvious in price.

That is why Tacavar treats FRED net liquidity as one of the most important bitcoin leading indicators in the research stack. Not because it explains every move, but because it forces the right question: is this market being supported by improving liquidity, or are traders extrapolating strength into a deteriorating macro backdrop?

The implementation burden is almost offensively low. With a free FRED API key and a lightweight ingestor, Tacavar can refresh the series on a daily cron, land it in Postgres, and join it against returns, volatility, and other regime features. Compared with many expensive macro products, the information-to-effort ratio is unusually high.
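A lightweight ingestor along these lines might look like the following sketch. The helper names are hypothetical; the endpoint and the `series_id`, `api_key`, and `file_type` parameters are FRED's documented observations API:

```python
from urllib.parse import urlencode

FRED_BASE = "https://api.stlouisfed.org/fred/series/observations"

def fred_url(series_id: str, api_key: str, start: str) -> str:
    """Build a FRED observations request that returns JSON."""
    params = {"series_id": series_id, "api_key": api_key,
              "file_type": "json", "observation_start": start}
    return f"{FRED_BASE}?{urlencode(params)}"

def parse_observations(payload: dict) -> dict:
    """Flatten a FRED JSON response into {date: float}, skipping
    the '.' placeholder FRED uses for missing values."""
    return {o["date"]: float(o["value"])
            for o in payload.get("observations", [])
            if o["value"] != "."}
```

A daily cron job would fetch WALCL, RRPONTSYD, WTREGEN, and DFF this way, upsert the parsed rows into Postgres, and recompute the net liquidity series on each refresh.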

How to normalize noisy public data into usable trading signals

Public data is not automatically useful. Tacavar’s advantage does not come from finding APIs. It comes from converting messy inputs into comparable signals that can survive real decision-making.

The first principle is robust normalization. With Wikipedia pageviews, raw counts are noisy and occasionally distorted by viral outliers. Tacavar uses median absolute deviation z-scoring rather than naive mean-based normalization because MAD handles spike behavior without allowing one extraordinary event to corrupt the baseline. That matters if the goal is to detect abnormal curiosity instead of merely counting traffic.
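A sketch of that normalization (function name hypothetical; the 0.6745 constant rescales MAD so scores are comparable to standard z-scores under a normal baseline):

```python
import statistics

def mad_zscores(values: list[float], eps: float = 1e-9) -> list[float]:
    """Robust z-scores using median absolute deviation (MAD).

    The median replaces the mean and MAD replaces the standard
    deviation, so one viral outlier cannot drag the baseline;
    eps guards against a zero MAD on flat series.
    """
    med = statistics.median(values)
    mad = statistics.median([abs(v - med) for v in values])
    scale = mad / 0.6745 + eps
    return [(v - med) / scale for v in values]

# One viral day barely moves the median baseline but still
# scores as an unambiguous spike:
scores = mad_zscores([1200, 1350, 1280, 1310, 54000])
spike_days = [i for i, z in enumerate(scores) if z > 3]
```

With a naive mean/standard-deviation z-score, the 54,000 outlier would inflate both the mean and the deviation, masking its own abnormality; the MAD version keeps the baseline anchored to typical days.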

The second principle is curated scope. More inputs do not automatically produce more information. Tacavar keeps article coverage intentional, because a smaller set of high-context series tends to outperform a sprawling taxonomy of weakly interpreted ones. The same is true for FRED net liquidity. The insight is not improved by adding every macro series available. It is improved by preserving conceptual coherence.

The third principle is alignment of frequency and use case. Tacavar treats daily public data as regime and attention information, not as a substitute for intraday execution data. That distinction prevents category errors. A daily liquidity series should influence directional bias, sizing, and patience. A curiosity spike series should influence investigation priority and risk awareness. Neither should pretend to be a one-click trade trigger.

The fourth principle is unification. Tacavar routes these datasets into one signals pipeline and one storage model so they can be compared against price, volatility, and other internal features using the same time conventions and metadata standards. Without that step, teams end up with disconnected charts instead of actual signal engineering.
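A minimal sketch of that unification step (helper name and column names hypothetical), joining several daily {date: value} series into one row-per-date table under a single time convention:

```python
def join_on_date(series: dict[str, dict[str, float]]) -> list[dict]:
    """Inner-join named daily series into one row per date.

    Only dates present in every input survive, which enforces a
    shared time convention across attention and macro inputs.
    """
    common = set.intersection(*(set(s) for s in series.values()))
    return [
        {"date": day, **{name: s[day] for name, s in series.items()}}
        for day in sorted(common)
    ]

# Hypothetical feature names and values:
signals = join_on_date({
    "btc_pageviews_z": {"2026-04-10": 0.4, "2026-04-11": 3.2},
    "net_liquidity":   {"2026-04-11": 6.05e6, "2026-04-12": 6.01e6},
})
# Only 2026-04-11 has both features, so only that row survives.
```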

Where free signals fit inside a modern trading research stack

Tacavar does not argue that free signals are sufficient by themselves. The argument is that they should sit earlier in the stack than many teams place them.

A modern Tacavar-style research stack starts with public structural inputs, then layers on market-native data, then adds proprietary transformations, and only then considers paid enrichment where it truly changes coverage or speed. In that order, free trading signals are not a backup plan. They are the base layer for hypothesis generation and regime detection.

Wikipedia pageviews belong in the narrative-discovery tier. They help Tacavar detect when a subject is entering the collective attention field faster than conventional commentary surfaces it. FRED net liquidity belongs in the macro-regime tier. It helps Tacavar frame whether the broad environment is supporting or resisting risk.

From there, Tacavar can decide what deserves more expensive analysis. If a curiosity spike appears in a topic that also aligns with improving liquidity conditions, that deserves escalation. If pageviews surge while liquidity deteriorates, Tacavar interprets the setup differently. The free signals do not replace judgment. They shape where judgment should focus.
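That triage logic can be written down directly. The sketch below is hypothetical: the function name, labels, and the z > 3 threshold are illustrative assumptions, not Tacavar's actual parameters:

```python
def triage(curiosity_z: float, liquidity_delta: float,
           z_threshold: float = 3.0) -> str:
    """Map a curiosity z-score and the recent change in net
    liquidity to a research-priority label."""
    spiking = curiosity_z >= z_threshold
    if spiking and liquidity_delta > 0:
        return "escalate"            # attention plus supportive liquidity
    if spiking:
        return "investigate-risk"    # attention into a tightening backdrop
    return "monitor"

labels = [triage(4.1, 0.5), triage(4.1, -0.5), triage(0.8, 0.5)]
```

The output is a prioritization hint, not a trade trigger, which matches the frequency-alignment principle above.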

For founder/operators, this ordering matters because it protects capital and engineering time. It is easier to justify paid tooling after public data has already identified a recurring blind spot. It is much harder to justify starting with expensive software that never had to beat the simplest usable alternative.

What Tacavar learned from building a unified signals pipeline

Tacavar learned that edge compounds when inputs share a common pipeline, not when they live in separate notebooks and bookmarked dashboards.

The practical breakthrough was not merely discovering that Wikipedia pageviews and FRED net liquidity were available. It was operationalizing them inside one signals architecture with consistent ingestion, storage, normalization, and downstream query patterns. Once Tacavar treated public attention and macro liquidity as first-class inputs rather than side experiments, they became easier to compare, audit, and extend.

That changed how Tacavar evaluates new ideas. The bar is now explicit: can a proposed signal be ingested reliably, normalized defensibly, and interpreted in context with existing crypto macro signals? If yes, it earns pipeline space. If not, it is probably just content.

Tacavar also learned that simplicity is often mispriced. Since Josh Fathi founded Tacavar in 1987, the durable operating lesson has remained consistent: foundational advantage usually comes from disciplined systems thinking, not from paying the highest invoice in the stack. In signal engineering, that means the strongest move is often to start with public data that is structurally early, economically meaningful, and operationally cheap.

That is the real case for these two inputs. Wikipedia pageviews provide a free read on emergent curiosity. FRED net liquidity provides a free read on macro oxygen. Put them in one Tacavar pipeline, normalize them properly, and they become more than interesting charts. They become usable priors.

Explore Tacavar’s trading research and signal-engineering capability at tacavar.com.