Security Fundamentals Course Intro
Introduction to the security fundamentals series - why this exists and what we'll cover.
- 1 After Chris's app was hacked, the most common security mistakes became clear - especially in apps built quickly with AI
- 2 These aren't comprehensive security rules - they're the most common issues seen in real apps right now
- 1 Misconfigured RLS (Row Level Security) is behind almost every vibe-coded app data breach you've seen in the news
- 2 Firebase had to change their default policy because so many apps shipped with security rules completely off
- 3 The most common RLS vulnerability: storing subscription status and rate limits on the same user table that users can edit
- 4 Don't just ask Claude to 'check my RLS' - prompt it with specific attack scenarios like 'can users bypass their subscription status?'
- 5 Use the Supabase or Firebase MCPs with Claude Code and Cursor so they can audit configurations directly
- 1 Frontend rate limits are useless - anyone can find your backend endpoint in the network tab and bypass them
- 2 Implement per-user rate limits on the backend, but don't store the limits on the same table users can edit
- 3 Add IP-based rate limiting as a second layer - even if someone creates a million accounts, the IP limiter stops them
- 4 Even without AI features, add rate limits - someone can spam your Supabase or Firebase and rack up a $50,000 usage bill
- 1 Never call AI providers, Stripe, email services, or cloud storage directly from the frontend - your keys are exposed
- 2 Use Supabase Functions or Firebase Functions if you don't have a backend set up
- 3 Environment variables on the frontend are NOT secure - they're only safe on the backend
- 4 Even mobile apps aren't safe - it's very easy to intercept network requests from native apps
- 1 Always set budget caps on every provider - it's better for your app to go down briefly than to wake up to a $10,000 bill
- 2 A leaked AWS key without a budget cap led to a $30,000 SageMaker bill - reduced to $2,000 after negotiation
- 3 If a service doesn't support budget caps, set up alerts and use workarounds like calling the billing API to shut off spend
- 1 Vibe coding with AI can be more secure than hand-coding - LLMs catch edge cases humans miss and don't get fatigued
- 2 The key is being deliberate about security - have back-and-forth conversations with Claude about attack scenarios
- 3 Use the free markdown security checklist to audit your codebase with Claude Code or Cursor
- 1 This series barely scratches the surface - think like an attacker and try to break your own app
- 2 Install the Supabase or Firebase MCP, grab the security checklist, and let Claude audit your codebase
Why Security Fundamentals Matter
A couple weeks ago, Chris’s app got hacked. The response was overwhelming - developers wanted to know how to protect their apps, especially those built quickly with AI tools. This series covers the most common security mistakes happening right now in apps built with Supabase, Firebase, and AI-assisted coding tools.
What This Series Covers
This isn’t a comprehensive security course - that would take hundreds of videos. Instead, it focuses on the specific, common mistakes that keep showing up in real apps: misconfigured RLS, missing rate limits, exposed API keys, and no budget caps. Every mistake covered here is one that was personally experienced and dealt with.
The RLS Problem
Row Level Security misconfiguration is the number one security issue in vibe-coded apps. Almost every data breach headline you’ve seen from a vibe-coded app traces back to this. The core issue: services like Supabase and Firebase let your frontend talk directly to the database, and RLS is the only thing standing between a user and everyone else’s data.
The Subscription Status Trap
The most dangerous pattern is storing subscription status and rate limits on the same table that users can edit. Even with “correct” RLS that limits users to their own rows, they can upgrade themselves to premium and remove their rate limits. Claude and Cursor don’t reliably catch this because the RLS itself is technically valid - it’s the data architecture that’s the vulnerability.
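To make the trap concrete, here's a minimal in-memory simulation of why a "correct" row-level policy still fails when billing fields live on the user-editable table. The types, table shapes, and policy functions are illustrative stand-ins, not real Supabase API calls:

```typescript
// A user row that mixes profile data with billing state - the dangerous pattern.
type UserRow = { user_id: string; email: string; is_premium: boolean; daily_limit: number };

// RLS-style policy: a user may update rows where user_id matches their auth id.
// This is "correct" at the row level.
function canUpdateRow(authUid: string, row: UserRow): boolean {
  return row.user_id === authUid;
}

// The flaw: because subscription status and limits sit on a table the user can
// write, the row-level policy happily lets them upgrade themselves.
function attackerUpgrade(authUid: string, table: UserRow[]): boolean {
  const row = table.find((r) => r.user_id === authUid);
  if (!row || !canUpdateRow(authUid, row)) return false;
  row.is_premium = true; // self-granted premium
  row.daily_limit = Infinity; // rate limit removed
  return true;
}

// The fix sketched here: billing state lives in a separate table with no user
// UPDATE policy - only the backend (service role) may write it.
type BillingRow = { user_id: string; is_premium: boolean; daily_limit: number };
function canUpdateBilling(role: "user" | "service_role"): boolean {
  return role === "service_role";
}
```

The point of the sketch is the architecture, not the policy syntax: split user-writable data from billing data, and give the billing table write access only from the backend.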
How to Actually Audit RLS
Don’t ask Claude to generically “check my RLS.” Instead, prompt it with specific attack scenarios: Can users bypass their subscription? Can they modify rate limits? Can they read another user’s data? Use the Supabase or Firebase MCPs so the AI can audit configurations directly rather than working from screenshots or SQL dumps.
Frontend Rate Limits Are Useless
If your rate limits only exist on the frontend, they don’t exist at all. Anyone can find your backend endpoint in the network tab and bypass whatever limits your UI enforces. This applies to mobile apps too - intercepting network requests from native apps is straightforward.
The Two-Layer Approach
Implement per-user rate limits on the backend: track generations per user and check the count before processing requests. Layer IP-based rate limiting on top of that - even if someone creates a million accounts, the IP limiter catches them. The combination makes abuse prohibitively expensive for attackers.
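A sketch of the two-layer idea, assuming a simple fixed-window counter. In-memory Maps stand in for the database or Redis counters a real backend would use, and the limits and window sizes are illustrative:

```typescript
// Fixed-window counter: one window per key (a user id or an IP address).
type Window = { count: number; resetAt: number };

class RateLimiter {
  private buckets = new Map<string, Window>();
  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request is allowed, false if the key is over its limit.
  allow(key: string, now: number = Date.now()): boolean {
    const w = this.buckets.get(key);
    if (!w || now >= w.resetAt) {
      this.buckets.set(key, { count: 1, resetAt: now + this.windowMs });
      return true;
    }
    if (w.count >= this.limit) return false;
    w.count += 1;
    return true;
  }
}

// Layer 1: per-user limit (e.g. 5 generations per day).
const perUser = new RateLimiter(5, 24 * 60 * 60 * 1000);
// Layer 2: per-IP limit (e.g. 100 requests per hour), catching mass account creation.
const perIp = new RateLimiter(100, 60 * 60 * 1000);

// Check BOTH layers before doing any expensive work.
function allowRequest(userId: string, ip: string, now?: number): boolean {
  return perIp.allow(ip, now) && perUser.allow(userId, now);
}
```

In production these counters belong in shared storage (and, per the RLS section, never on a table the user can edit), but the control flow is the same: reject before you pay for the AI call, not after.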
Keep Sensitive APIs on the Backend
Calling AI providers, Stripe, email services, or cloud storage directly from the frontend exposes your keys to anyone who opens the network tab. This isn’t just about AI - manipulated Stripe calls can change prices, leaked SendGrid keys let attackers send emails as you, and exposed S3 credentials give full read/write access to your storage.
Environment Variables Aren’t Magic
A major misconception: putting API keys in environment variables doesn’t make them secure. Frontend environment variables are exposed in the bundle. They’re only safe on the backend. If you don’t have a backend, use Supabase Functions or Firebase Functions - that’s exactly what they’re designed for.
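A sketch of what "move it to the backend" looks like: the client calls your endpoint, and only the server ever sees the provider key. The names here (`PROVIDER_API_KEY`, the URL, the request shape) are illustrative assumptions, not a specific provider's API; the fetcher is injectable so the handler is easy to test:

```typescript
// Minimal shape of an outbound HTTP call, so the sketch is self-contained.
type Fetcher = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string },
) => Promise<{ status: number }>;

// Backend-only handler: the key is read from a SERVER-side env var
// (e.g. process.env.PROVIDER_API_KEY in Node, Deno.env.get(...) in a Supabase
// Function) and passed in here - the browser never receives it.
async function handleGenerate(
  prompt: string,
  apiKey: string | undefined,
  doFetch: Fetcher,
): Promise<{ status: number }> {
  if (!apiKey) throw new Error("PROVIDER_API_KEY not configured on the backend");
  // Server-to-server request: the Authorization header stays off the frontend.
  return doFetch("https://api.example-provider.com/v1/generate", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
}
```

Deployed as a Supabase or Firebase Function, this is the whole pattern: the frontend sends only the prompt, and rate limiting plus auth checks happen here before the provider is ever called.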
Set Budget Caps on Everything
Every provider you use should have a budget cap. It’s far better for your app to go down temporarily than to wake up to a five-figure bill. One leaked AWS key without a budget cap led to a $30,000 SageMaker bill - and that was with AWS eventually reducing it to $2,000 as a courtesy.
When Caps Aren’t Available
Some services don’t offer budget caps directly. At minimum, set up spend alerts so you know when costs are spiking. For Firebase, you can set up a function that calls the Google billing API to shut off spend automatically. It’s frustrating that this isn’t built in, but the workaround is worth the effort.
The Honest Answer
Vibe coding and LLM-assisted coding are not inherently insecure. In fact, apps built with AI assistance can be more secure than hand-coded apps. LLMs catch edge cases humans miss, think through scenarios developers wouldn’t consider, and never get fatigued. When you’re coding by hand, security is usually the last thing you think about after hours of building features.
The Key: Be Deliberate
The distinction matters. Blindly shipping AI-generated code without review will always have security issues - just like blindly hand-coding without thinking about security. The secure approach is using AI while being deliberate: have back-and-forth conversations about attack scenarios, prompt injection, and abuse patterns. That combination of AI capability plus human intentionality produces the most secure code.
Take Action Now
This series barely scratches the surface of application security, but it covers the most common and damaging mistakes in AI-built apps right now. The next step is simple: grab the free security checklist, install the Supabase or Firebase MCP, and let Claude audit your codebase. Think like an attacker - try to break your own app before someone else does.
Why this series exists
So, a couple weeks ago, my app got hacked. I made a video about it, shared some security tips, and I was honestly surprised by the response. A ton of you guys reached out and said you wanted to know more about how to protect your apps. So, I decided to make a dedicated video about the most common security mistakes that I'm seeing right now, especially if you're building with AI or you're using something like Supabase or Firebase. A quick disclaimer before we get into it: this is not a comprehensive security video - that would take like a hundred videos. These are just the most common things that I'm personally seeing, especially with apps that were quickly made with AI.
About the presenter
If you're new here, welcome to the video. My name is Chris and I build productivity apps. I've been an iOS and React developer for over 10 years and I've made every single mistake that I'm about to talk about in this video. So, I'm not coming at you saying that I'm some amazing developer and I'm better than everyone. I'm telling you this stuff because I've made these mistakes and I've dealt with the consequences. I mean, even just a few weeks ago, if you saw my video, I recently got hacked.
What is RLS?
The number one issue that I've been seeing so far is misconfigured RLS, or row level security. If you do any sort of vibe coding with AI or you've used Supabase or Firebase, you have probably heard of RLS. There's also been a ton of drama with vibe-coded apps getting hacked, data breaches, all the data leaking. Almost every single headline that you've seen where this has happened is probably because of a misconfigured RLS. The way that apps are supposed to work is you have your front end, which could be your application, and then you have a backend, and then you have a database, which is where all of the user data is being stored. Your front end should never talk directly to the database. It has to go through the backend to do that.
How Firebase and Supabase changed the game
And then came Firebase and Supabase and their whole pitch was you don't need a backend to develop an app. They gave you these client libraries that you install on your app directly on the front end and your app can talk directly to the database securely. This was a very controversial thing because it goes against this whole front-end backend database architecture. Now obviously, this wasn't great from a security standpoint. So their solution was to add something called RLS, or in the case of Firebase, it's called Firebase Security Rules. What RLS does is it acts as a filter. So even though your app is talking directly to the database, RLS limits what it can access. You can set rules like a user can only access their specific data.
The Firebase RLS disaster
Without RLS, someone could hit your database and just download the entire thing. But when RLS is configured correctly, they should only be able to access their own data. But there is a huge problem - if you misconfigure this, it can be very devastating. Firebase actually had a huge issue with this. Previously, when you spun up a Firebase instance, their security rules were completely off. Everything was just open to view and download by default. They did this so developers could move faster. They tried adding a warning, but that wasn't enough. So many apps got hacked that they actually had to change their policy to automatically lock your database after a certain number of days.
The subscription status vulnerability
In my case, for my calorie tracking app, I'm using Supabase and I was confident I configured RLS correctly. I even had Claude and Cursor double check it. But I made a big mistake. I had RLS configured correctly where users can read and write only their own data. But I made the mistake of storing the subscription status and the rate limits on the same table. Which meant that they were able to modify their subscription status to give themselves premium. And even worse, they were able to modify the rate limits and then they could just hit my AI endpoint unlimited and rack up a $10,000 bill.
Auditing real apps
I actually decided to audit a couple people's apps in preparation for this video. Some of the apps were vibe-coded by non-technical people, but a lot of them were actually vibe-coded by people with technical experience, and over half of them had the exact same problem. I was able to manipulate the apps to give myself premium access. There was one instance where I was able to download a user's entire table, so I could see all the data on Supabase. I saw information I was absolutely not supposed to see.
How to fix RLS issues
My number one tip is to double check your RLS configurations. Not just in a general way like asking Claude Code 'hey, check my RLS configuration.' I mean in a very specific way - try to think through specific scenarios of where RLS will fail. Prompt it with very specific things like: can users bypass their subscription status? Can they modify their rate limits? Is there a scenario where a user can read another user's data? You have to be creative and do some critical thinking. I created a free markdown file with common scenarios - feed it into Claude Code and Cursor and ask it to audit your codebase.
Most common RLS vulnerabilities
The most common vulnerability pattern I'm seeing with RLS is storing sensitive data like subscription status or rate limits on the user table itself. Claude Code and Cursor just don't have a problem with you doing this. The second most common pattern is misconfiguration where users can see other people's data. And one more very important thing - if you're using Claude Code and Cursor, please use the Supabase or Firebase MCPs. Even if you're using AWS or Azure, give Claude Code and Cursor access to the CLI so they can audit the configurations directly.
Why frontend rate limits don't work
Mistake number two is adding no rate limits. This is a huge one, especially if you have an app that has any sort of AI features. You might be thinking, well, I put limits on my front end, so they can only generate five generations a day. But the problem is, if someone finds your endpoint, they can bypass any limits you put on the front end because they're going directly to the back end. And finding those backend endpoints is not hard at all. Anybody can see this in the network tab. And yes, this does apply to mobile apps.
Per-user and IP-based rate limiting
You need rate limits on the back end because front-end rate limits are not going to cut it. The way that I personally do it is I have per user rate limits. I store the number of generations a user has made on a table and the rate limits for that user also on the table. Every time someone calls the back end, I just check if they're within the limits. But remember mistake number one - make sure not to put the limits on the same table as the user data they're allowed to edit.
IP-based rate limiting as a second layer
Another approach you can use in combination with per user rate limits is to add IP-based rate limiting. Set it up where a single IP address can only hit your backend endpoints a certain number of times per hour, per minute, per week. Even if someone spins up a million accounts and bypasses your user-based rate limiting, the IP-based rate limiter should stop them. Technically they can rotate their IP address with proxies, but at that point it's so costly it's usually not worth it.
Rate limits even without AI
Even if you don't have AI features, it's still good practice to add these limits because you don't want someone to unnecessarily spam your endpoint. If you're using Firebase and Supabase, these are still technically usage-based services. You're getting charged based on reads, writes, and bandwidth. Someone could hit up these services and rack up a huge bill. There are a lot of cases of people waking up to a $50,000 bill. So please add these limits even if you have no AI features.
The frontend API call mistake
Mistake number three is calling AI endpoints or other sensitive API calls directly from the front end. I have seen way too many people do this. I was calling things like Stripe directly from the front end. And yes, it's not just AI endpoints. It could be something sensitive like Stripe - someone can take that endpoint, manipulate prices or subscription tiers before it hits your server. I've also seen people calling email services like SendGrid and Postmark directly from the front end, which means someone can take those endpoints and start sending emails as you.
Cloud storage and AI provider exposure
I've seen people call cloud storage providers like AWS S3 directly from the front end with hard-coded credentials. In some cases, this allows people to download and upload whatever they want to your S3 bucket. And obviously, the big really bad thing is people calling AI providers directly from the front end. If you have a photo generation service calling Vertex AI directly from your front end, someone can intercept your key and use it for whatever they want and rack up a huge bill.
The solution: use the backend
The solution is very simple. Make sure that these calls are being done from your backend, not your front end. If you don't have a backend set up, this is where Supabase Functions and Firebase Functions come in. There's also a huge misconception, especially with mobile development, that if you put your API keys in an environment variable, it is just secure automatically. That is not the case. Most environment variables on all front ends are exposed. Environment variables are only safe on a backend.
Why budget caps matter
Another issue I'm seeing is not adding budget caps. Whatever provider you're using, make sure to add a budget cap so that if you hit that budget cap, the services just shut off completely. It is so much better for your app to go down for a little bit than for you to wake up with a $10,000 bill.
The $30,000 AWS bill
This is kind of embarrassing, but I made a huge mistake a couple years ago where one of my environment variables got leaked - an AWS key that just had too many permissions attached. Absolutely no budget cap either. I woke up to a $30,000 bill where someone took the key and used AWS SageMaker to do machine learning training. I got lucky and the bill was waived to about $2,000. From that day forward, I started taking budget caps and hiding environment variables way more seriously.
Alerts when caps aren't available
Some services don't have budget caps. So at the bare minimum, make sure to have alerts set up where you get notified when your spend crosses thresholds you've set. There are workarounds on services like Firebase where you can set something up to call the Google billing API and shut off spend automatically.
Is vibe coding insecure?
A question I keep getting is, is vibe coding and LLM coding inherently insecure? My honest opinion is no. I would actually argue that the apps where I used AI for coding are more secure than the ones I did a couple years ago by hand. I took Claude Code and Cursor, threw it at some of my old code bases, and there were so many vulnerabilities that it caught. LLMs can see things that a lot of humans miss. It can think through edge cases I wouldn't have thought of. And more importantly, it doesn't get fatigued.
Why AI-assisted coding can be more secure
When I was coding by hand, security was like the last thing I was thinking about. I was so tired just building features that I just didn't have enough time to think about it properly. When you're using AI for coding, not only does the LLM not get fatigued, but I am less fatigued as a developer. I have a lot more of a clear head to think through edge cases and have good conversations about security.
The deliberate approach
When I say vibe coding, I'm not talking about just shipping things without looking. That will always have inherent security issues, same as just coding by hand and not caring. The vibe coding that is more secure is where you know what's going on, you're in control, you can still move super quickly, but you at least care and are deliberate about security. My favorite conversations to have with Claude are the ones where we're deliberately talking about security - 'What about this scenario? What about prompt injection?' It gives me peace of mind when I ship something.
Action items
If you're watching this and thinking you need to check your app - grab the markdown file linked in the description, paste it into Claude Code or Cursor, and let it audit your codebase. Install the Supabase or Firebase MCP if you're using those services. Try to break your own app. Think like someone who wants to abuse your system. That alone will catch the majority of issues.
Wrap-up and next steps
Again, this is not comprehensive. This is barely scratching the surface. This was just the common stuff that I've been seeing recently that I wanted to call out, especially with people who are vibe coding or building really quickly with AI. If you guys have any security tips or there's something that I missed that you really think I should cover in a part two, please leave a comment down below.