Before anything else: I built this for myself. One ticket. To sit in the stands at Chinnaswamy and watch a match. Nothing was resold, nothing was listed anywhere. This system monitors a publicly accessible API endpoint and sends a notification - it doesn't automate purchases or touch any platform safeguard. I wanted my first ever live cricket match, and RCB tickets at face value feel more like a lottery than a purchase.
When RCB's first home game for IPL 2026 went on sale on March 24 at 4pm, fans reported tickets disappearing before they could complete checkout. Social media flooded with complaints within minutes of the window opening. Some people said they got to the payment page, entered their details, and came back to find their seats gone.
This isn't unique to one match or one season. In previous seasons, the entire online inventory has vanished in under a minute, and within hours the same tickets have appeared on resale sites at several times face value. And that's on top of official pricing: RCB home games at Chinnaswamy are sold under dynamic pricing, so even face value climbs as demand spikes.
Part of the problem is structural. A large chunk of capacity gets pre-allocated to sponsor quotas, BCCI and state association blocks, and club memberships before general sale opens. At Chinnaswamy this is compounded further: the stadium is running at a reduced capacity of roughly 28,000 - down from its normal ~35,000 - following the crowd crush of June 4, 2025, in which eleven people died outside the stadium during RCB's title victory celebrations. The Karnataka government cleared the stadium to host matches again only under a set of conditions that included a 17-point infrastructure overhaul. So you're not competing for a full house - you're competing for 28,000 seats, a meaningful slice of which was already allocated before the public queue opened.
There have also been allegations of irregularities across multiple IPL seasons - suggestions that pre-booking or diversion to secondary markets happens before the public window officially opens. I can't verify that. But the pattern of tickets disappearing in under 60 seconds and reappearing on resale platforms at 10–15x the original price repeats year after year without a satisfying explanation from anyone involved.
The standard approach is sitting at your computer at exactly the right time, refreshing until the page loads, sprinting through checkout, and hoping inventory doesn't evaporate while you're entering your details. I decided that wasn't good enough for me.

Two hours of sleep-deprived coding, a Discord ping at 3:30pm, and tickets in hand by 3:40pm. Here's what that journey actually looked like.
I sat down at 4:00am with a rough plan: poll the ticketing endpoint, detect when tickets go live, fire a Discord notification fast enough to actually act on it. Simple in theory. In practice, I spent the first 90 minutes completely wrong about how the API worked.
My first assumption was that the API required authentication. This seemed reasonable. Ticketing platforms have obvious incentives to block automated access. They'd gate their endpoints. So I opened the browser, loaded the ticketing site, and started inspecting network requests in the developer tools.
I saw cookies. I saw a Bearer token in one of the authorization headers. My brain latched onto those and I started building around them.
I tried passing session cookies with my requests. Spent time figuring out which ones were relevant, copied them across, tried attaching them to fetch calls. Some requests came back with data, some didn't, none of it was reliable. Cookies from a browser session tied to a specific login don't carry cleanly to an independent script - the server-side session validation behaves inconsistently when the same session appears from a different network context.
More importantly, I eventually realized the cookies I was chasing were completely unrelated to the ticketing API. They belonged to other services embedded on the page. I'd copied everything visible in the network tab without thinking carefully about which specific request actually mattered.
The Bearer token felt more promising. I found it in the authorization header on a specific API call, copied it into my script, and the requests worked. For a while.
Bearer tokens expire. This one lasted somewhere between 30 and 60 minutes before requests started failing with auth errors. Re-generating it meant logging in again, which required an OTP sent to a phone number. There was no way to automate that step. The token approach was dead on arrival for anything meant to run unattended overnight.
I also noticed something odd while debugging: the token still worked briefly after I logged out in the browser. That told me it wasn't tightly bound to a live session - it was a time-expiring credential, not a session-bound one. Useful to understand, useless for my purposes.
My third wrong turn was building a CORS proxy. Browser requests to the API were hitting CORS restrictions, so the plan was to route my fetch calls through a Cloudflare Worker that would make the request server-side, out of reach of the browser's CORS checks, and relay the response back.
I got the Worker running. Sent a request through it. Got a 403.
CORS is a security mechanism enforced by the browser, not the server. A proxy doesn't bypass server-side security - it just moves the request to a different origin. If the server is blocking requests based on IP ranges or TLS fingerprints, routing through a Cloudflare Worker makes things worse, because Cloudflare's datacenter IPs are well-known and actively flagged by services that filter non-browser traffic.
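For the curious, the proxy was conceptually no more than this - a minimal sketch, not my exact Worker, with a placeholder URL standing in for the real ticketing endpoint:

```typescript
// Minimal CORS-relay Worker sketch. TARGET_API is a placeholder.
export default {
  async fetch(_request: Request): Promise<Response> {
    const TARGET_API = "https://example.com/api/events";
    // The Worker makes the request server-side, so the browser never sees a CORS failure...
    const upstream = await fetch(TARGET_API, {
      headers: { accept: "application/json" },
    });
    const body = await upstream.text();
    // ...and relays the response back with permissive CORS headers.
    return new Response(body, {
      status: upstream.status,
      headers: {
        "content-type": "application/json",
        "access-control-allow-origin": "*",
      },
    });
  },
};
```

None of which helps when the rejection happens before CORS even enters the picture.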
By this point I'd read enough about request filtering to try full header spoofing. I copied every header from a real browser request: sec-ch-ua, sec-fetch-site, sec-fetch-mode, sec-fetch-dest, the full User-Agent string, accept-language, all of it pasted in.
Requests that had been going through fine started failing more consistently.
The reason, which I pieced together after, is that fingerprinting often happens at the TLS layer, not the HTTP layer. When a browser initiates a TLS handshake, the specific cipher suites it advertises, the order it lists them, and the extensions it includes form a fingerprint identifying it as Chrome on Windows. A script running in Bun makes a completely different TLS handshake even if the HTTP headers are identical. You can copy every header perfectly and still be identifiable because the TLS fingerprint doesn't match. Sending more headers doesn't help - it increases the mismatch signal by making the request look like something trying too hard to look like a browser.
Minimal headers worked better. I settled on four: accept, accept-language, user-agent, and origin with the expected referer.
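The request that eventually worked was roughly this - a sketch with a placeholder URL, since the real endpoint belongs to the ticketing platform:

```typescript
// Minimal-header fetch. TICKETING_EVENTS_URL is a placeholder, not the real endpoint.
const TICKETING_EVENTS_URL = "https://example.com/api/events";

const fetchEvents = async (): Promise<unknown> => {
  const res = await fetch(TICKETING_EVENTS_URL, {
    headers: {
      accept: "application/json",
      "accept-language": "en-US,en;q=0.9",
      "user-agent":
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36",
      origin: "https://example.com",   // the site's own origin
      referer: "https://example.com/", // the expected referer
    },
  });
  if (!res.ok) throw new Error(`Request failed with ${res.status}`);
  return res.json();
};
```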
Around 5:30am, after an hour and a half of chasing dead ends, I stepped back and tried the thing I should have tried first: fetch the endpoint with minimal headers and see what comes back.
It came back fine. No auth required. A clean JSON response with a list of events, each carrying details about the match, date, price range, and a field called event_Button_Text.
When tickets aren't available yet, that field says something like "COMING SOON". When they go live, it says "BUY TICKETS".
That's it. I'd spent 90 minutes building authentication systems for an endpoint that needed no authentication. The public event listing API exists to serve the website's front page - of course it's accessible without credentials. It has to be. That's its job.
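For context, the shape I ended up typing against looked roughly like this. Only event_Date and event_Button_Text are fields I'm quoting from the actual response; the other names are illustrative:

```typescript
// Rough shape of each event object. Field names other than event_Date and
// event_Button_Text are illustrative stand-ins, not the platform's real keys.
interface EventData {
  event_Name: string;        // match title
  event_Date: string;        // date string, parsed with new Date()
  event_Price_Range: string; // displayed price range
  event_Button_Text: string; // "COMING SOON" before sale, "BUY TICKETS" once live
}
```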
The detection logic ended up being a six-line filter:
```typescript
const getValidEvents = (events: EventData[]) => {
  const now = nowIST().getTime();
  return events.filter(
    (e) => new Date(e.event_Date).getTime() >= now && e.event_Button_Text === "BUY TICKETS"
  );
};
```
Future events only, button text matching exactly. Everything else is infrastructure around this function.
With detection figured out, I wrote the rest in about 30 minutes.
I built it on Effect, a TypeScript library for typed, composable async programs. I've been using it on other projects and it fits how I think about programs that need to run continuously, handle errors without crashing, and do multiple things at once. For something where a failed fetch shouldn't kill the whole process and retries need sensible backoff, Effect handles the plumbing without me wiring it manually.
The polling loop runs on two intervals depending on what it finds. If nothing changed since the last check, it backs off to a slower interval with random jitter - somewhere between 10 and 15 seconds. If it detects a change, it drops to a faster interval of 3 seconds.
The jitter matters because a perfectly regular polling interval looks mechanical and is more likely to trigger rate limiting from a server watching for patterns. Randomizing the slow interval makes the request cadence look more like a human periodically refreshing a page.
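In code, the interval selection is barely more than a conditional - a simplified reconstruction, with the 10-15 second and 3 second figures taken from what I described above:

```typescript
// Pick the next polling delay: fast after a detected change, slow with jitter otherwise.
const nextDelayMs = (changedSinceLastPoll: boolean): number => {
  if (changedSinceLastPoll) return 3_000; // fast mode: 3 seconds
  const base = 10_000;                    // slow mode floor: 10 seconds
  const jitter = Math.random() * 5_000;   // spread slow polls across 10-15 seconds
  return base + jitter;
};
```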
Before filtering for available events, the system compares the full response against the previous one using a simple hash. If nothing changed, it skips the notification logic entirely. This avoids re-alerting on events that were already live from a previous check.
On top of that, there's a 1-hour cooldown per event code tracked in a Map. Even if the change detector flags a difference, it won't notify for an event it already notified about within the last hour. This handles cases where the response changes for some other reason while the same event is still live.
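A simplified reconstruction of that two-layer dedup - the "hash" here is just the raw response string, which is an assumption on my part about the simplest thing that works:

```typescript
const COOLDOWN_MS = 60 * 60 * 1000; // 1 hour per event code

let lastResponseFingerprint = "";
const lastNotifiedAt = new Map<string, number>();

// Layer 1: skip all notification logic when the payload is identical to the last poll.
const responseChanged = (rawResponse: string): boolean => {
  if (rawResponse === lastResponseFingerprint) return false;
  lastResponseFingerprint = rawResponse;
  return true;
};

// Layer 2: even if the response changed, don't re-notify for an event code
// that already fired within the cooldown window.
const pastCooldown = (eventCode: string): boolean => {
  const last = lastNotifiedAt.get(eventCode) ?? 0;
  if (Date.now() - last < COOLDOWN_MS) return false;
  lastNotifiedAt.set(eventCode, Date.now());
  return true;
};
```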
If the endpoint is temporarily unreachable, the system waits progressively longer between each retry rather than hammering it. After 5 failed retries it surfaces the error and the outer loop handles it.
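In Effect terms, that retry behaviour is a schedule attached to the fetch - a sketch assuming Effect 3's retry options, with illustrative durations and a placeholder URL:

```typescript
import { Effect, Schedule } from "effect";

// Wrap the fetch in an Effect so failures become typed errors instead of crashes.
const tryFetchEvents = Effect.tryPromise(() =>
  fetch("https://example.com/api/events").then((res) => {
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return res.json();
  })
);

// Exponential backoff between attempts, capped at 5 retries,
// after which the error surfaces to the outer loop.
const fetchEventsWithRetry = tryFetchEvents.pipe(
  Effect.retry({
    schedule: Schedule.exponential("1 second"),
    times: 5,
  })
);
```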
I ran two loops concurrently using Effect's fiber model: the main polling loop and a heartbeat that pings Discord every 2 hours to confirm the process is still alive.
A message in Discord every 2 hours means I know it's running. Without that, a silent crash overnight means finding out after the sale window has already closed.
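Structurally it's two effects running side by side - a sketch with placeholder bodies, using Effect.all for the concurrency rather than whatever exact wiring I had that night:

```typescript
import { Effect, Schedule } from "effect";

// Placeholders for the real implementations.
const pollOnce = Effect.log("poll: checking events");
const sendHeartbeat = Effect.log("heartbeat: still alive");

// The poller repeats on its own cadence; the heartbeat pings every 2 hours.
const pollLoop = pollOnce.pipe(Effect.repeat(Schedule.spaced("10 seconds")));
const heartbeat = sendHeartbeat.pipe(Effect.repeat(Schedule.spaced("2 hours")));

// Run both loops concurrently; if either dies, the failure is visible instead of silent.
const main = Effect.all([pollLoop, heartbeat], { concurrency: "unbounded" });

Effect.runPromise(main);
```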
My first notification target was Telegram. Delivery worked technically, but latency was inconsistent and I hit rate limits during testing. Discord webhooks were faster - consistently instant in every test - and supported role mentions, which meant the notification would actually push to my phone rather than sitting in a channel I might not have open.
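The Discord side is a single webhook POST; the role mention in the message body is what turns it into a phone-buzzing push. The webhook URL and role ID below are placeholders:

```typescript
const WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"; // placeholder
const ROLE_ID = "000000000000000000";                                // placeholder

const notifyDiscord = async (message: string): Promise<void> => {
  await fetch(WEBHOOK_URL, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      content: `<@&${ROLE_ID}> ${message}`,   // role mention + alert text
      allowed_mentions: { roles: [ROLE_ID] }, // make sure the mention actually pings
    }),
  });
};
```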
I tried deploying this to a cloud hosting provider expecting it to run more reliably than leaving my laptop on overnight. Within a few minutes, every request came back with 403 Forbidden.
Cloud providers run on datacenter IP ranges that are publicly documented and actively flagged by services trying to filter non-browser traffic. My residential IP at home behaved completely differently because residential IPs look like real users to IP reputation systems. No amount of header adjustment fixes a blocked IP range. The solution was to not deploy to a datacenter at all.
By around 6:30am, everything was running on my machine at home, and it kept running for the rest of the night and the following day. I went to sleep.
I was at my desk doing something unrelated when Discord fired on my phone. The role mention came through with enough noise that I looked immediately.
The system had caught it. Match name, date, price range, all in the notification. I opened the ticketing site, found the stand I wanted, picked a seat, went through checkout without rushing.
3:34pm. Order confirmed. Under five minutes from notification to tickets in hand.
The people trying to buy manually at that same moment were probably still figuring out the window had opened. By the time a person notices the sale is live, navigates to the site, and gets to the checkout page, the better seats are gone and the checkout process itself is slow under peak traffic load. I was on the site within 30 seconds of the window opening because I didn't have to notice it manually.
To be clear about what this system does and doesn't do: it sends a notification. I still went through checkout myself, made the seat choice myself, paid myself. The system is a detection and alerting tool, not a purchase automation. Whether that distinction matters is a fair question, but it's the accurate description.
This was my first ever live cricket match. I'd never been to any kind of live match before - not IPL, not domestic, nothing. I'd watched hundreds of games on TV and thought I had a reasonable picture of what it would feel like. I was wrong about this in most of the ways that matter.
The match was RCB vs Delhi Capitals on April 18, and it was an absolute heartbreaker. RCB put up 175/8 batting first, with Phil Salt top-scoring at 63 off 38 alongside a quick 19 from Virat Kohli at the top. Tim David added a late 26 off 17, but DC's bowling - led by Axar Patel (2/18) and Kuldeep Yadav (2/32) - strangled the middle overs and the pitch played slower than expected throughout.
DC's chase was a thriller. Bhuvneshwar Kumar took three wickets and had DC reeling at 22/3 inside three overs, and RCB fans believed. KL Rahul built a composed 57 off 34 and kept DC in it through the middle overs. Krunal Pandya dismissed Rahul just when it looked like he might finish it, and RCB squeezed beautifully in the death - conceding just 13 runs across overs 17 to 19.
It came down to the last over. DC needed 15 off 6 balls. David Miller hit two massive sixes - one landed in the stands near where I was sitting and passed above my head - and DC got there with one ball to spare, finishing 179/4. Tristan Stubbs took Player of the Match for his unbeaten 60 off 47 that held the chase together through the middle overs after DC lost three wickets in the powerplay.
We lost. But I don't think I've ever been more present during any sporting event in my life.
The noise at Chinnaswamy doesn't translate through a broadcast. It's not just louder - it has a different quality. It fills the space around you rather than coming at you from one direction. Chants start somewhere in the crowd and spread across the full stadium in seconds, and when 28,000 people do the same thing simultaneously there's something physically different about it that I didn't expect.
Every boundary gets a reaction that feels disproportionate to what happened on the field, and that's exactly right. The TV broadcast smooths that out. When Bhuvneshwar took those wickets early, the noise went up. The crowd around me was a mix you don't get anywhere else - families, groups of friends, people who clearly hadn't watched cricket in years but showed up anyway, and diehard RCB kit wearers in every direction.
When the last over started and the whole ground knew it was going to be close, the stadium got quiet in a way I hadn't expected. Not silent - just held. When Miller hit those sixes, you could feel the air go out of the place. That collective deflation in a packed stadium is something I didn't know I'd ever feel, and somehow it made the whole experience more real, not less.
I paid face value for that ticket. I sat in a stand I chose based on what I was willing to pay. The seat was mine because I built something that got me there fast enough.
The technical lessons here are specific but generalize easily. Minimal headers beat spoofing because the fingerprint that matters isn't in your headers - it's in the TLS handshake. Residential IPs matter in ways you can't route around in software. Local execution often beats cloud for anything that looks scraping-adjacent. And public APIs serving frontend data are frequently unauthenticated even when you assume they're not, because they have to be accessible to serve the page.
The meta-lesson is that problems that look complicated under sleep deprivation often have a simpler shape once you stop building toward your first assumption. I built an auth system, a proxy, and a header-spoofing setup before trying the thing I should have tried at minute one. Every one of those was wasted time that came from not questioning the assumption that launched it.
There's also something worth noting about notification latency. Detecting availability is useless if the notification arrives 5 minutes later. The detection-to-Discord pipeline needed to be under 10 seconds end to end, and Discord webhooks with role mentions delivered that. Getting from ping to confirmed purchase in under five minutes would have been impossible with slower notification infrastructure or any kind of batching.
I'll run this again. The specific details - the endpoint shape, the response format, the field names - any of that might change. But the pattern stays the same: find the simplest signal that indicates what you're waiting for and wire it to the fastest notification path available.
If you're an RCB fan who has missed sale windows staring at a refresh screen, the frustration is real, and the sale is structured in a way that leaves the manual approach genuinely disadvantaged. The transparency concerns around ticket distribution have been raised repeatedly across multiple IPL seasons and haven't been answered clearly. What I built is one response to that. It doesn't fix anything structural - it just puts one more person on a slightly more level footing at the front of a queue that was already skewed.