Yesterday, OpenAI quietly rolled out a feature that lets ChatGPT connect to your bank account. Through a partnership with Plaid, users can now link accounts at more than 12,000 financial institutions — Chase, Fidelity, Robinhood, Capital One, the lot — and have an AI analyse their spending, subscriptions, portfolio performance, and upcoming payments.
Read that again. An AI model you don't control, running on infrastructure you don't own, now has access to your financial data.
And the thing is — the feature is genuinely useful. That's exactly what makes it dangerous.
The convenience trap
Every significant erosion of privacy in the last twenty years arrived dressed as convenience. Google reading your email to serve better ads. Alexa listening in your living room to save you a trip to the light switch. Your phone tracking every location so it could show you a cute "Year in Review."
We don't give up privacy because someone demands it. We give it up because someone makes the alternative slightly more annoying than the surrender.
The most dangerous AI feature isn't the one that scares you. It's the one that's so useful you forget to ask what it costs.
OpenAI knows this. They're not stupid. ChatGPT already handles 200 million financial questions a month from people typing things like "why did my spending go up this month?" The missing piece was always the data itself. Now they have it.
What OpenAI says vs. what matters
To their credit, OpenAI built some safeguards. You can disconnect accounts at any time. Disconnected data gets deleted within 30 days. You can review and delete financial memories the chatbot stores.
That's decent. Better than most. But it still puts the burden on you to manage permissions you probably won't remember granting six months from now.
Here's what the press release doesn't say: once your financial patterns are in the model's context window, they shape every future interaction. The AI doesn't just answer your question about budgeting. It understands your risk tolerance, your spending habits, your income patterns, your financial anxiety. That context doesn't disappear when you close the tab.
Guardrails aren't about saying no to AI. They're about saying yes — with conditions.
The entrepreneur's version of this problem
If you're running a business, this announcement should make you think about your own AI stack. Not because OpenAI is evil — they're building what users want. But because the same logic that makes connecting a personal bank account feel reasonable will, within months, make connecting your business accounts feel reasonable too.
And then your customer data. And your CRM. And your internal communications.
Each individual connection makes sense. The accumulated exposure is the problem.
I've been running my business on AI for three years. Every system, every workflow, every agent I deploy has guardrails built in from the start. Not because I don't trust the technology — I trust it enormously — but because architecture is what turns trust from faith into engineering.
A framework for AI access
If you're thinking about connecting AI to anything sensitive — financial data, customer records, internal docs — here's how I think about it. Four principles, with a rough sketch in code after the last one:
Principle of least access. Give the AI exactly what it needs to do the job. Not your entire bank history — the specific data points it requires. If it needs to categorise expenses, it doesn't need your account number.
Expiry by default. Every permission should have a sunset date. Not "until I revoke it" — that's how you end up with 47 connected apps you forgot about. Set it to 90 days and force re-authentication.
Human checkpoints on actions. Reading data is one thing. Acting on it is another. The moment AI moves from "show me my spending" to "cancel this subscription" or "transfer this money," a human should be in the loop. Every time. No exceptions.
Separation of contexts. Your business AI should not have access to your personal finances. Your personal AI should not have access to your business data. The temptation to connect everything into one "super-assistant" is real. Resist it. Compartmentalisation is a feature, not a limitation.
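To make this concrete, here is a minimal sketch of how those four principles might look if you encoded them in your own stack. Everything in it is hypothetical: AccessGrant, the scope names, and authorize() are illustrative stand-ins for whatever your tooling actually uses, not Plaid's or OpenAI's real API. The point is that each constraint lives in the architecture, not in your memory.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical types for illustration only; not a real Plaid or OpenAI API.

@dataclass(frozen=True)
class AccessGrant:
    """A single, narrowly scoped permission given to an AI agent."""
    context: str                 # "personal" or "business", never both (separation of contexts)
    scopes: frozenset[str]       # only the data points the job needs (least access)
    granted_at: datetime
    ttl_days: int = 90           # expiry by default, not "until revoked"

    @property
    def expired(self) -> bool:
        return datetime.now(timezone.utc) > self.granted_at + timedelta(days=self.ttl_days)


READ_ONLY_SCOPES = frozenset({"transactions.read", "categories.read"})
ACTION_SCOPES = frozenset({"subscriptions.cancel", "payments.transfer"})


def authorize(grant: AccessGrant, requested_scope: str, human_approved: bool = False) -> bool:
    """Gate every AI request against the grant's constraints."""
    if grant.expired:
        return False             # sunset date passed: force re-authentication
    if requested_scope not in grant.scopes:
        return False             # the model never touches data it wasn't granted
    if requested_scope in ACTION_SCOPES and not human_approved:
        return False             # acting, not just reading, requires a human in the loop
    return True


# Usage: a personal-finance agent that can categorise spending but cannot move money on its own.
grant = AccessGrant(
    context="personal",
    scopes=READ_ONLY_SCOPES,
    granted_at=datetime.now(timezone.utc),
)

print(authorize(grant, "transactions.read"))     # True: in scope, not expired
print(authorize(grant, "payments.transfer"))     # False: never granted, and would need approval anyway
```

The detail worth copying is that expiry and human approval are defaults baked into the type, not settings you have to remember to switch on later.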
Architecture is what turns trust from faith into engineering.
The normalisation problem
What concerns me most isn't this specific feature. It's the trajectory. Twelve months ago, the idea of connecting your bank account to an AI chatbot would have felt insane. Today it launched to applause. Six months from now it'll be table stakes.
That's how normalisation works. Not with a bang, but with incremental consent. Each step feels small enough to accept. The accumulated distance is what catches you.
I'm not saying don't use it. I'm saying know what you're agreeing to. Understand the trade. Build the guardrails before you connect the pipe, not after something goes wrong.
OpenAI also quietly mentioned they're planning Intuit integration next — estimating tax impacts from stock sales, checking credit card approval odds. That's your entire financial life in one model's context. Think about whether you want that. Really think about it.
The bottom line
The companies — and the individuals — who come through the AI revolution intact won't be the ones who said no to everything. They'll be the ones who said yes to the right things, with the right constraints, reviewed on the right cadence.
Architecture beats ambition. Every single time.
The AI revolution won't be lost to the people who said no. It'll be lost to the people who said yes to everything without reading the terms.
OpenAI connecting to your bank account isn't a crisis. It's a signal. The signal says: if you don't set your own guardrails, someone else will set the defaults for you. And their incentives are not your incentives.
Set the guardrails. Then go build.