by Kate @ Truthifi

Summary: AI is dismantling the moat around financial data access, though accuracy remains the weak point. Perplexity, Plaid, and Replit are demonstrating the potential of AI sitting on top of finances, but longer term, tokenization threatens to make aggregation unnecessary entirely. Three things are holding the personal CFO back: data accuracy, consumer trust, and a regulatory vacuum in which nobody knows who's liable when AI gives bad financial advice. We need infrastructure that makes financial AI outputs trustworthy. The technology is ready. The question is: who builds the foundation that makes it safe enough for regular people, not just the technically adventurous ones?
The bundle was never really about the investments.
My friends know I didn't start out in financial services. I thought I'd be writing and drawing comics. Then freelance illustration and graphic design work led me to UX design, and here I am a few years later doing product design for a fintech. I'm still playing catchup with the complicated history of this industry, but here's one thing I know: for the last few decades, if you wanted a coherent view of your financial life, you paid someone for it.
A financial advisor, private bank, wealth management platform, portfolio tracking service, budgeting app… Everyone has a curated mix of holdings in different places, and until recently, either a human being or a dashboard was required to see the big picture.
Because of the way financial data is siloed by each provider, holistic financial intelligence currently requires either an intermediary app (like Mint, Monarch, Truthifi, or most recently, Perplexity) integrated with an aggregator (like Plaid, Yodlee, or ByAllAccounts) to port in the data, or a lot of time in a spreadsheet. That's a lot of technology built just to connect siloed data.
And for a long time that was a feature of the industry, not a glitch. Because investment products aren't just investments—they're data-as-a-service, wrapped in advice, charged as management fees.
That means when a wirehouse puts AI on top of its own data layer, as many are attempting to do, it'll only ever see the slice of your life it holds. It won't know what to make of the rest. The information asymmetry that made each provider's bundle valuable is the same thing that will make their internal AI incomplete.
So if the value of the bundle decreases as data access increases, what happens to the bundle (and the bundlers)? With so many shifts happening at once, it's kind of hard to see the forest for the trees.
AI might accomplish what trackers (and 1033) couldn't.
First, let's cover the state of data access in personal finance.
There was supposed to be a better solution for hard-to-access financial data: open banking. Section 1033 of Dodd-Frank would have required banks to share consumer-permissioned data via standardized APIs starting on April 1 of this year; no more credential sharing, no more screen scraping, just clean direct feeds that any authorized app could access as a matter of regulated right. In the near term, that would've helped Plaid, giving it stable bank-sanctioned data instead of workarounds. But 1033 implementation is now stalled alongside the broader Consumer Financial Protection Bureau (CFPB) pullback. In spite of this, AI companies are moving full speed ahead.
In March, Perplexity launched a Portfolio feature letting users connect their brokerage accounts through Plaid so they could ask questions about their investments in plain English. This is a huge leap towards what Simon Taylor (the first person I heard mention it) calls the AI-driven "personal CFO." Six weeks later, the partnership expanded to include checking, savings, credit cards, and loans: your entire financial life, queryable in one place.
That's not all—last week, Plaid announced a native connector inside Replit, the AI-powered builder platform, making it possible for users to quickly build a custom financial app using their own live account data. Not a template someone else designed, but their own, tailored to their actual questions.
The tools that institutional wealth management spent decades building—portfolio tracking, spending analysis, net worth dashboards, performance attribution—can now be assembled in a browser window, for free, in an afternoon. AI is quickly changing the way regular consumers will pool and analyze their financial data, though the accuracy of the output is still an open question.
Plaid's CEO called it "a paradigm shift in financial services." I think that's right, and I think the firms selling signature bundles of investments know it, too. Data access is only going to get easier.
This is the near-term disruption; the more interesting question is what it points toward.
What do other shifts mean for all the players?
Quick recap: right now, aggregators, institutions, advisors, and AI all need each other. Will they always?
The tokenization happening right now is largely supply-side: firm to firm, institutional infrastructure being laid before it ever touches a consumer wallet. BlackRock's tokenized money market fund has crossed $2 billion in AUM. J.P. Morgan has processed over $1.5 trillion in tokenized transactions. In March, the Fed, OCC, and FDIC jointly clarified that tokenized securities should receive the same regulatory treatment as their traditional counterparts. None of that is in your Fidelity account yet, but it signals that the legal and technical scaffolding is being built, and these things tend to move downstream faster than expected once the institutional layer is established.
The consumer version of that world, where your assets live in a wallet you control, their history and composition on-chain and readable by anything you authorize, is further off. But when it arrives, the aggregation layer becomes unnecessary by design. No translation layer introducing error, no schema built for 1990s recordkeeping mangling your data before an AI can see it. The data is clean because the asset itself is the data.
Here's how each player might be affected.
Aggregators. Plaid's moves with Perplexity and Replit are groundbreaking. But its current model depends on financial data being siloed in the first place. It exists to bridge institutions that don't talk to each other. In a world where that fragmentation dissolves, what happens to the bridge? I'm sure they're thinking about this, too.
Institutions & advisors. BlackRock and J.P. Morgan are preparing for tokenization, but not necessarily for what it (and wider transparency) means for the value of their bundles. They'll need a way to continue proving the value of their "edge" in a world of total transparency.
AI companies. Right now, AI needs connections to wherever its customers' assets are held, but that may get easier and someday no longer require aggregation in its current form. Looking forward, it'll need a stronger data normalization layer that enables trusted money movement.
The last bullet is where I see AI-ready data infrastructure like Truthifi's fitting into the transition—as what makes movement possible (and defensible) between investment products. If people are going to be moving assets around based on what AI tells them, they need to be sure that the advice they receive is based on accurate data and non-hallucinatory calculations. So, for that matter, do FINRA and the SEC. (I'm obviously not a disinterested party here, but I'd make this argument even if I weren't building it, because the problem is real regardless of who solves it.)
Because while aggregators like Plaid do the connectivity work, getting from raw aggregated data to something an AI can actually reason over correctly requires a different layer: one that understands financial semantics well enough to know that a dividend reinvestment isn't a purchase, that a gap in history needs reconstruction rather than acceptance, that a collective investment trust isn't a mutual fund… That domain knowledge is what makes AI financial guidance trustworthy rather than just confident. It's what bridges where we are now to where we're going.
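To make that abstract "semantics layer" concrete, here's a minimal sketch of the kind of rule it encodes. Everything here is hypothetical—the `RawTransaction` schema, the type codes, and the `normalize` helper are mine for illustration, not any real aggregator's API:

```python
from dataclasses import dataclass

@dataclass
class RawTransaction:
    # Hypothetical fields; real aggregator schemas vary by provider.
    description: str
    amount: float
    type_code: str  # e.g. "BUY", "DIV", "DRIP"

def normalize(txn: RawTransaction) -> dict:
    """Map a raw feed record to a semantically meaningful category.

    A dividend reinvestment (DRIP) adds shares but is NOT new money in,
    so it must never be counted as a contribution/purchase.
    """
    if txn.type_code == "DRIP":
        return {"category": "dividend_reinvestment", "contribution": False}
    if txn.type_code == "BUY":
        return {"category": "purchase", "contribution": True}
    if txn.type_code == "DIV":
        return {"category": "dividend", "contribution": False}
    # Unknown codes get flagged for review rather than guessed at.
    return {"category": "unclassified", "contribution": False}

# A $150 DRIP should not inflate reported contributions:
drip = RawTransaction("VTSAX DIVIDEND REINVEST", 150.0, "DRIP")
buy = RawTransaction("VTSAX PURCHASE", 500.0, "BUY")
contributions = sum(t.amount for t in (drip, buy)
                    if normalize(t)["contribution"])
print(contributions)  # 500.0 — the DRIP is excluded
```

The point isn't the code; it's that each of these rules is a piece of financial domain knowledge an AI can't be trusted to infer on the fly from raw records.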
Regardless, those sitting at each toll booth along the current data supply chain can see the changes coming. The question is: how fast?
The trust problem is also getting more complicated.
There's a blocker that predates all of this stuff, and it's the one that has historically slowed everything down: most people don't want to hand their financial credentials to a third party. I heard this all the time in the early days of Truthifi, and we've done all we can to show the robust privacy and security measures that we—and aggregators like Plaid—undertake to protect consumer financial data.
The fear isn't irrational. Given how many data-breach notifications land in my inbox each year, I understand people's hesitation, especially when access to their life savings is potentially involved.
With AI, the trust picture gets a bit more complicated. On one hand, fears surrounding AI run the gamut from data privacy to environmental impact to ethics. On the other, an interactive AI layer on top of financial data may offer a clearer value prop than a metrics-laden dashboard. For example, if you connect your accounts and an AI tells you, in plain language, that you've been double-paying for a subscription, or that your spending in one category drifted 30% over six months, or that your asset allocation no longer matches your risk tolerance, the benefit is immediate. At least for some people, the trust calculus may change.
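The first two of those examples are simple enough to sketch. This is a toy illustration with made-up records—the data shape and both helpers are my own assumptions, not any product's actual logic:

```python
from collections import defaultdict

# Hypothetical (merchant, month, amount) records; real data would come
# from an aggregator feed.
charges = [
    ("StreamCo", "2025-06", 12.99),
    ("StreamCo", "2025-06", 12.99),  # same merchant billed twice in one month
    ("GymApp",   "2025-06", 29.99),
]

def find_double_charges(charges):
    """Flag merchants that billed the same amount more than once in a month."""
    seen = defaultdict(int)
    for merchant, month, amount in charges:
        seen[(merchant, month, amount)] += 1
    return [key for key, count in seen.items() if count > 1]

def category_drift(first_period, last_period):
    """Fractional change in a category's spend between two periods."""
    return (last_period - first_period) / first_period

print(find_double_charges(charges))            # [('StreamCo', '2025-06', 12.99)]
print(round(category_drift(400.0, 520.0), 2))  # 0.3, i.e. a 30% drift
```

The checks themselves are trivial; the hard part, as the rest of this piece argues, is that the input data has to be accurate before outputs like these are worth trusting.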
It might make a difference to the "non-engaged" crowd, or it might not. A lot of the people who dropped Mint or never open their brokerage app weren't just uninterested—they were avoidant. Those people still need a solution.
But even if consumer trust does catch up, there's still an even harder problem to solve.
That brings us to the regulatory vacuum.
Right now, if an AI tells you something wrong about your portfolio and you act on it, there's no clear answer for who's responsible. FINRA flagged last December that member firms' use of generative AI was outpacing their internal controls. The SEC made third-party AI data handling an exam priority for 2025. Most AI products are careful to call themselves "information" rather than "advice" for exactly this reason.
Until liability is clearer, truly high-stakes AI financial guidance (not just insights, but recommendations that inform real decisions) will stay in a gray zone. And the CFPB has been significantly scaled back since early 2025, with prior AI-related guidance on algorithmic decision-making revoked and enforcement activity sharply reduced.
The gap isn't going unfilled entirely: states are stepping in, with New York, New Jersey, and Utah among those moving to extend consumer protections into AI-driven financial services. But patchwork state law is a slower substitute for federal standards, and leaves a long window where AI products can operate in the space between "tool" and "advisor" without clear accountability.
This matters beyond the US, too. The EU's AI Act is bringing some structure to algorithmic financial services in Europe, but cross-border AI products—which most of these will be—will increasingly find themselves navigating inconsistent standards across jurisdictions, or optimizing for the least restrictive ones.
So the trust that consumers need in order to adopt AI financial management at scale isn't just about privacy architecture and genuinely valuable output. It's about knowing that someone is watching, and that there are consequences when things go wrong.
The technology is ready, or close to it. The infrastructure is being built. The consumer appetite is there. I think the accountability layer is the missing piece.
What does all of this mean?
To summarize:
Firms and advisors built their moats around proprietary bundles and being the only coherent interpreter of (at least part of) your financial life.
AI won't eliminate firms and advisors, but it will hand their clients a way to see around them. If it's no longer difficult to get a neutral interface that queries a complete financial picture across all institutions, they'll need to consider how to keep proving out their "edge."
Eventually, AI + on-chain assets that carry their own history and need no intermediary at all may force aggregators to pivot and incorporate other features.
We're getting closer to the personal CFO Simon Taylor talks about—one regular people trust to give them a genuinely complete, genuinely reliable financial picture, and one smart enough to help them act on it—but the trust and data layers need to mature first for it to scale.
Trust will come slowly, through products that don't break it and through regulation built carefully enough to give both consumers and institutions confidence in what they're signing up for.
The question isn't whether this shift happens, but how cleanly, how quickly, and who builds the infrastructure that makes it safe enough to actually work for regular people—not just the technically adventurous ones. That last part is what I'm most interested in.
Kate is Head of Design & Marketing at Truthifi. Truthifi Connect is a secure MCP server that normalizes and repairs financial data before your AI ever touches it. truthifi-connect.ai
If you're a fintech or advisor building on top of aggregated data and want to talk about what a cleaner data layer could mean for your AI roadmap, we'd love to hear from you: kate@truthifi.com