
Asset managers are excellent at looking backward.
Most firms can tell you precisely what happened yesterday: performance to the basis point, factor contributions, current exposures, fee and ops reporting, all clean and auditable. This is Business Intelligence (BI), the process of understanding historical data.
Where things get fuzzy is when you ask a different class of question: What will happen next? How should the portfolio change if conditions shift? What should we do now, given everything we know?
To answer questions that look to the future, we turn to Decision Intelligence (DI).
For the last two decades, meaningful DI lived almost exclusively inside large quant and multi-strategy hedge funds, and a handful of major banks that had the people, time, and budget to build it. Unsurprisingly, these systems required years of custom engineering.
The interesting shift today isn't whether DI helps businesses make better decisions than BI alone (it does). It's that modern DI is becoming dramatically more accessible, putting capabilities that were once reserved for a select few within reach of many more firms.
Asset management has always been data-driven, but the nature of that data has completely changed. In the early 2000s, data meant access—Reuters terminals, Bloomberg screens, historical databases. Today, the problem is the opposite. Managers are drowning in data—market data, alternative data, sentiment, satellite imagery, ESG metrics.
We used to hunt for data; now we're buried in it and struggle to see what matters. How do we turn our mounds of data into actionable signals? How do we move from understanding the past to anticipating the future?
That evolution is reflected in how organizations think about their data strategy, and where the real distinction between BI and DI becomes critical.
Business Intelligence is fundamentally retrospective. It collects, processes, and presents historical data to answer questions like: What happened? How much did we earn? Where are we underperforming?
For asset management, insights you can derive from BI include performance attribution (which positions drove returns?), risk monitoring (current exposures and VaR), operational reporting (fees, reconciliation, regulatory compliance), client analytics, and valuation history.
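To make those BI examples concrete, here is a minimal Python sketch of two of them, performance attribution and historical VaR, computed from historical data alone. The tickers, weights, and simulated returns are illustrative stand-ins, not a prescribed schema.

```python
import numpy as np
import pandas as pd

# Illustrative daily position-level returns and static weights (hypothetical data).
rng = np.random.default_rng(0)
dates = pd.bdate_range("2024-01-02", periods=250)
positions = ["AAPL", "MSFT", "TLT", "GLD"]
returns = pd.DataFrame(rng.normal(0.0003, 0.01, (250, 4)),
                       index=dates, columns=positions)
weights = pd.Series([0.4, 0.3, 0.2, 0.1], index=positions)

# Performance attribution: which positions drove portfolio returns?
contribution = (returns * weights).sum()            # cumulative contribution per position
portfolio_returns = (returns * weights).sum(axis=1)

# Risk monitoring: 1-day 95% historical VaR of the portfolio.
var_95 = -np.percentile(portfolio_returns, 5)

print(contribution.sort_values(ascending=False))
print(f"1-day 95% historical VaR: {var_95:.2%}")
```

Everything here is backward-looking: it explains what happened, not what to do next.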
The standard BI toolkit includes Tableau, Power BI, Snowflake, BigQuery, and specialized platforms like Aladdin (BlackRock). For most asset managers, BI infrastructure is well-established.
Who uses it? Everyone. Risk teams, operations, compliance, portfolio managers, and senior management all depend on BI dashboards and reporting. BI adoption is nearly universal because it's operationally necessary.
Decision Intelligence is the practice of combining real-time data, advanced analytics (machine learning, statistical modeling, optimization), and reasoning to generate forward-looking signals and enable adaptive decision-making.
If BI is about understanding the past and present, DI is about anticipating the future and acting on it systematically.
This isn't new. Large quant managers have been doing this since the 2000s. What's changing is the accessibility and implementation timeline.
DI creates competitive advantage in several ways: better signal generation from the data you already collect, faster adaptation when market conditions change, and more efficient execution of the resulting decisions.
The DI stack includes ML platforms (TensorFlow, PyTorch), managed ML services (Databricks, Snowflake, Google Cloud AI, AWS SageMaker), and real-time data infrastructure (Kafka, Spark). DI requires specialized infrastructure to work at scale.
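To give a flavor of that real-time layer, here is a toy sketch that consumes tick messages from a Kafka topic and maintains a rolling momentum signal. It assumes the kafka-python client; the topic name, message schema, broker address, and window length are all hypothetical.

```python
import json
from collections import deque

from kafka import KafkaConsumer  # pip install kafka-python

WINDOW = 60                 # rolling window of the last 60 prices (hypothetical)
prices = deque(maxlen=WINDOW)

# Hypothetical topic carrying JSON ticks like {"symbol": "AAPL", "price": 187.3}.
consumer = KafkaConsumer(
    "market-ticks",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    tick = message.value
    prices.append(tick["price"])
    if len(prices) == WINDOW:
        # Simple momentum signal: percent change over the window.
        signal = (prices[-1] - prices[0]) / prices[0]
        print(f"{tick['symbol']} momentum over last {WINDOW} ticks: {signal:+.4f}")
```

The point is less the signal itself than the plumbing: DI consumes data as it arrives rather than querying it after the fact.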
DI remains concentrated among large managers, hedge funds, and quantitative shops where the business model demands it. Adoption is expanding as specialized platforms emerge that democratize the same tools the big quant funds have.
Quant shops have favored DI over BI for two decades. The new shift is from "DI as a custom engineering effort" to "DI as operational infrastructure."
Previously, building DI required years of custom engineering, teams of dozens who understood both finance and technology, separate platforms for signal generation and optimization, and bespoke pipelines to stitch them all together.
This was fine if you were a hedge fund with unlimited capital and engineering resources. It was prohibitively expensive for everyone else.
Now, the question is: can enterprise-grade DI capabilities be packaged as accessible platforms?
For many asset managers, the answer matters more than the technology itself. You don't need DI to be new. You need it to be something you can build in months instead of years, run with a handful of people instead of 50 engineers, and have it sit on the Snowflake platform you already pay for. The hard part is still encoding your actual portfolio rules.
Additionally, an emerging consideration for DI is how to integrate large language models (LLMs). LLMs can be useful for processing alternative data (e.g., earnings transcripts, market commentary, and regulatory filings), but putting them in the decision loop also means managing hallucination risk and likely regulatory scrutiny.
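As one concrete way to manage that hallucination risk, a common pattern is to constrain the LLM to a closed label set and reject any answer whose cited evidence does not appear verbatim in the source document. The sketch below assumes the openai Python client; the model name, prompt, and guardrails are placeholders, not recommendations.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ALLOWED_LABELS = {"positive", "negative", "neutral"}

def classify_transcript(transcript: str) -> dict:
    """Ask for sentiment plus a verbatim quote, then validate both."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (
                "Classify the sentiment of this earnings transcript as "
                "positive, negative, or neutral. Reply as JSON with keys "
                "'label' and 'evidence', where 'evidence' is a verbatim "
                f"quote from the transcript.\n\n{transcript}"
            ),
        }],
        response_format={"type": "json_object"},
    )
    answer = json.loads(response.choices[0].message.content)

    # Guardrail 1: the label must come from a closed set.
    if answer.get("label") not in ALLOWED_LABELS:
        raise ValueError(f"unexpected label: {answer.get('label')!r}")
    # Guardrail 2: the cited evidence must literally appear in the source,
    # which catches a common form of hallucinated support.
    if answer.get("evidence", "") not in transcript:
        raise ValueError("evidence not found verbatim in transcript")
    return answer
```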
Beyond LLMs, the operational piece is critical. DI for asset managers requires not only standard ML and analytics tools, but also the ability to model complex relationships, constraints, and objectives, then continuously optimize decisions as new information arrives.
This used to mean building separate systems: ML platforms for signal generation, optimization engines for prescriptive analytics, custom code to integrate them, and data pipelines to move data between the layers. Each integration point adds complexity and latency and requires specialized engineering, especially since these systems were historically disjoint from the data infrastructure itself.
RelationalAI consolidates this stack. At its core is a relational knowledge graph and constraint-based reasoning system that operates as a native app inside your existing Snowflake data cloud.
Why this matters operationally for asset managers:
1. No Data Movement. Your portfolio data, market data, constraints, and execution rules live in Snowflake. RelationalAI operates on that data in place, meaning no ETL, no data silos, and no additional infrastructure. This is critical, as asset managers with Snowflake don’t need to migrate years of data architecture.
2. Unified Prediction and Optimization. Rather than building ML models that output predictions, then feeding those into a separate optimization engine, RelationalAI brings both together natively. You define your portfolio constraints (regulatory limits, liquidity bounds, tracking error targets), your execution constraints (transaction costs, market impact), and your forward-looking objectives. The system reasons over all of this together: GNNs understand the structure of relationships between assets, and solvers recommend optimal decisions given constraints and predictions. This all occurs within one platform, with a common semantic understanding and language (a generic sketch of the pattern follows this list).
3. Business Logic at the Data Layer. You encode portfolio rules, rebalancing logic, hedging strategies, and risk thresholds as part of the graph structure. When market conditions change, your constraints and logic automatically apply.
4. Speed to Implementation. Building a DI system for portfolio management used to require years of engineering, model development, optimization, integration, and testing. RelationalAI cuts the timeline from 18 months to 90 days because the infrastructure is already there. You're not building the platform; you're encoding your specific portfolio semantics and constraints on top of it.
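Point 2 above describes prediction and optimization sharing one platform. The sketch below is not RelationalAI's API; it is a generic Python illustration of the pattern, using scipy: a stubbed ML return forecast feeds a mean-variance objective subject to the kinds of constraints described above (full investment, long-only bounds, a per-position cap, and a smooth transaction-cost penalty against current weights). All numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.06, 0.03, 0.04, 0.05])     # stand-in for an ML return forecast
Sigma = np.diag([0.04, 0.03, 0.01, 0.02, 0.025])  # toy diagonal covariance
w_current = np.full(5, 0.20)                       # current portfolio weights
risk_aversion, tcost = 5.0, 10.0                   # hypothetical parameters

def objective(w):
    # Maximize forecast return net of risk and a smooth transaction-cost
    # penalty on turnover, expressed as a minimization.
    return (-(mu @ w)
            + risk_aversion * 0.5 * (w @ Sigma @ w)
            + tcost * ((w - w_current) ** 2).sum())

constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]  # fully invested
bounds = [(0.0, 0.30)] * 5                                      # long-only, 30% cap

result = minimize(objective, w_current, method="SLSQP",
                  bounds=bounds, constraints=constraints)
print("recommended weights:", np.round(result.x, 3))
```

In the fragmented stack described earlier, the forecast, the constraints, and the solver would each live in a separate system; the consolidation argument is that they share one data layer and one semantic model instead.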
For asset managers, this translates to: building dynamic risk management, adaptive allocation, optimized execution, and real-time signal integration in a fraction of the time with a lean team.
I have spent nearly two decades in asset management, across quantitative research, portfolio management, trading models, valuation frameworks, asset allocation, and risk management. In that time, I watched DI evolve from an edge the largest firms built for themselves into something essential for asset management.
In the early 2000s, we built trading models on historical data. By 2008, we learned that historical correlations collapse when they matter most. We added stress tests, measured risk more carefully, and gradually shifted toward real-time data ingestion and ML that spits out a usable signal you can plug into an optimizer.
The payoff was real, with better signal generation, faster adaptation, and more efficient execution. But building this infrastructure required years of engineering effort and teams that understood both finance and technology. For most asset managers, it remained prohibitively expensive.
What I learned from that experience is that DI works. It gives you an edge. But the barrier has never been whether the technology is possible; it has always been the cost and complexity of implementation.
Looking ahead:
Large quantitative shops will continue building custom DI infrastructure because they need specialized competitive advantage. But the landscape is shifting. Enterprise-grade DI infrastructure, delivered through platforms like RelationalAI that operate natively in data clouds like Snowflake, means smaller and mid-market managers can now access capabilities that previously required massive engineering teams.
This doesn't commoditize DI. Edge in asset management comes from how you use these tools, not from merely having them. But it does lower the barrier. The differentiation moves from "can we afford to build this?" to "can we encode our unique portfolio logic and constraints better than competitors?"
For the industry, this means the gap between DI haves and have-nots narrows. It also means that regulatory scrutiny around AI-driven decisions will intensify, with explainability and risk management becoming table stakes.
The asset managers who build robust DI capabilities while maintaining BI rigor will thrive. DI itself won't be the differentiator; your encoded portfolio logic will be.