A few other techniques like this allow tracking to be extremely resilient. After the massive ratelimit in May (likely due to 0neb), it was just barely at the edge of what's even survivable for this exploit, but we pulled through: tweaked priorities, decreased scan net sizes, wrote some special cases for stationary players, and more. It's easy to forget, but this isn't just for checking chunks.
We created a very efficient base downloader by essentially "paint bucketing" or "flood filling" the modified blocks compared to 2b2t's seed to get all ground level structures. You'd be shocked how far people go out, millions of blocks, just to build a tiny fishing hut and afk.
It seeks out the locations in the chunk with the most changes, and rechecks around them periodically, as often as once every thirty minutes. This allows us to keep up with bases as new buildings get built or torn down. We also redownload from scratch every three weeks.
So, we don't just have coords of every base, we don't just have a world download of every base, we actually have the entire history of the construction and possibly destruction of the base, block by block, hour by hour, quite high resolution.
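To make the "paint bucket" part concrete, here's a minimal sketch of the idea in Java. It isn't the actual downloader; the `ChunkPos` type and the `isModified` predicate (standing in for "does this chunk differ from terrain freshly generated with 2b2t's seed?") are assumptions for illustration.

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;
import java.util.function.Predicate;

// Minimal sketch of the "paint bucket" idea: starting from a known chunk, flood
// outward through neighboring chunks, but only keep expanding where the chunk
// differs from what world generation on the server's seed would produce.
// ChunkPos and the isModified predicate are stand-ins, not the real implementation.
public final class BaseFloodFill {
    public record ChunkPos(int x, int z) {}

    public static Set<ChunkPos> fill(ChunkPos start, Predicate<ChunkPos> isModified, int limit) {
        Set<ChunkPos> base = new HashSet<>();
        Queue<ChunkPos> frontier = new ArrayDeque<>();
        base.add(start);
        frontier.add(start);
        while (!frontier.isEmpty() && base.size() < limit) {
            ChunkPos cur = frontier.poll();
            // 4-connected neighbors; diagonals could be added the same way
            ChunkPos[] neighbors = {
                new ChunkPos(cur.x() + 1, cur.z()), new ChunkPos(cur.x() - 1, cur.z()),
                new ChunkPos(cur.x(), cur.z() + 1), new ChunkPos(cur.x(), cur.z() - 1)
            };
            for (ChunkPos n : neighbors) {
                // only spread into chunks whose blocks differ from fresh world gen
                if (!base.contains(n) && isModified.test(n)) {
                    base.add(n);
                    frontier.add(n);
                }
            }
        }
        return base; // every chunk in the connected blob of player-modified terrain
    }
}
```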
How do we actually make this happen, nitty gritty? The old version was super simple. A forgehax module was written to do the spiral search and write down the coords to a file. It's actually exactly the same as that elytra stash finder, except it does it remotely and doesn't have to fly. Fr1kin ran it at his house for over a year.
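The spiral walk itself is nothing fancy, just expanding square rings of chunk coordinates around 0,0. A rough sketch of that iteration order (not the forgehax module's actual code; the names are made up):

```java
// Minimal sketch of a square spiral over chunk coordinates, the same walk the old
// forgehax module (and the elytra stash finder) performs: ring 1, ring 2, ...
// expanding outward from the origin. Method names are illustrative only.
public final class SpiralScan {
    public interface ChunkVisitor { void visit(int chunkX, int chunkZ); }

    public static void spiral(int maxRadius, ChunkVisitor visitor) {
        visitor.visit(0, 0);
        for (int r = 1; r <= maxRadius; r++) {
            // walk the square ring at distance r, one edge at a time
            for (int x = -r; x < r; x++) visitor.visit(x, -r);  // top edge, left to right
            for (int z = -r; z < r; z++) visitor.visit(r, z);   // right edge, top to bottom
            for (int x = r; x > -r; x--) visitor.visit(x, r);   // bottom edge, right to left
            for (int z = r; z > -r; z--) visitor.visit(-r, z);  // left edge, bottom to top
        }
    }

    public static void main(String[] args) {
        // e.g. print each chunk; the real module sent a check and wrote hits to a file
        spiral(3, (x, z) -> System.out.println(x + "," + z));
    }
}
```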
The files would be sent and added to a website that showed them all. We had to do something a little bit more complicated here. The base requirement was the communication between overworld and nether bots, and it sorta spiraled out of control from there, lol. So, we start with the primary bot instance, which is a DigitalOcean droplet.
This was picked to have the absolute lowest ping to 2b2t. This lets us squeeze the absolute maximum packets per second out of the connection, with close to instant detection of network lag so we can throttle down to almost nothing when it happens. The result is the highest sustained checks per second across the day.
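The throttling is conceptually simple: watch whether responses are still flowing, and the moment they stop, collapse the check rate until things recover. A hand-wavy sketch, where every threshold and name is an assumption rather than the real controller:

```java
// Rough sketch of latency-based throttling: keep a target checks-per-second, and
// drop it to almost nothing the moment responses start lagging behind.
// The thresholds and field names here are assumptions, not the real controller.
public final class CheckThrottle {
    private final double maxChecksPerSecond;
    private volatile double currentChecksPerSecond;
    private volatile long lastResponseNanos = System.nanoTime();

    public CheckThrottle(double maxChecksPerSecond) {
        this.maxChecksPerSecond = maxChecksPerSecond;
        this.currentChecksPerSecond = maxChecksPerSecond;
    }

    /** Call whenever a response arrives from the server. */
    public void onResponse() {
        lastResponseNanos = System.nanoTime();
        // responses flowing normally: creep back up toward the maximum
        currentChecksPerSecond = Math.min(maxChecksPerSecond, currentChecksPerSecond * 1.05 + 1);
    }

    /** Call before sending the next batch of checks. */
    public double allowedChecksPerSecond() {
        long silenceMillis = (System.nanoTime() - lastResponseNanos) / 1_000_000;
        if (silenceMillis > 500) {
            // network lag detected: throttle down to almost nothing until it clears
            currentChecksPerSecond = 1;
        }
        return currentChecksPerSecond;
    }
}
```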
The server ran a very cute headless minecraft we developed. It saves on RAM by running multiple fully fledged minecraft instances inside a single Java process. To be clear, this isn't anything fake like reminecraft or a proxy. This is a true full minecraft client, just with all the rendering and keyboard and mouse stuff ripped out. So that's the bots, but what does the master do? First and foremost it provides a layer of abstraction over the bots and 2b2t.
TCP connections can sometimes drop, the bots can get kicked from 2b2t, a bot can randomly die and respawn in a different dimension (rarely), etc.
So, there is a system of priority based queues and checks. A "task" can be created, which is a straight line list of chunks to check. Or a task could be as simple as "check this particular chunk". The world and connection system handles the actual communication to the bots, and all that. If a bot drops offline, the pending tasks are reassigned to another in the same dimension.
If they all drop, it's held for when they reconnect. Sitting on top of all that, there is the system of scanners and filters. A scanner is something that tries to fish out people to begin with.
The simple mid-priority ones are just highways. We also do the overworld ring at 2k out VERY commonly (once every sixteen seconds, at the moment). A little bit lower priority is the retry scanner, which looks at past bases and tries to suss out other members of the base that log in at different times of day. It is continuous but low level, using up 5 checks a second. And finally, lowest priority is the spiral scanner, which just scans outward over and over.
It can take hours to days. It isn't really crucial; it only exists as a last resort "well, if you aren't doing anything else important, might as well". The priority system is simple and strict: all the highest priority tasks are done before any lower priority ones get to start. If all the time is spent on the filters (which are highest priority), the scanners will never run at all.
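Strict priority like that doesn't need anything clever, just a queue of tasks ordered by priority where the next chunk to check always comes from the highest-priority task that still has chunks left. A minimal sketch (the task shape and priority numbers are illustrative, not the real code):

```java
import java.util.ArrayDeque;
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.Queue;

// Minimal sketch of the strict priority rule: a task is a straight-line list of
// chunks to check (or a single chunk), and nothing lower priority runs while
// anything higher priority still has chunks pending. Names are illustrative.
public final class TaskScheduler {
    public record ChunkPos(int x, int z) {}

    public static final class Task {
        final int priority;            // lower number = higher priority
        final Queue<ChunkPos> chunks;  // straight-line list of chunks to check
        Task(int priority, Iterable<ChunkPos> chunks) {
            this.priority = priority;
            this.chunks = new ArrayDeque<>();
            chunks.forEach(this.chunks::add);
        }
    }

    private final PriorityQueue<Task> tasks =
            new PriorityQueue<>(Comparator.comparingInt((Task t) -> t.priority));

    public void submit(Task task) { tasks.add(task); }

    /** Hand out the next chunk to check: always from the highest-priority pending task. */
    public ChunkPos nextChunk() {
        Task top = tasks.peek();
        if (top == null) return null;            // nothing pending at all
        ChunkPos next = top.chunks.poll();
        if (top.chunks.isEmpty()) tasks.poll();  // task finished, drop it
        return next;
    }
}
```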
Once we get a hit from one of the scanners, we turn to the tracky tracky manager. This checks if we already have a filter tracking this particular player, and if not, starts a new one. These filters are by default in monte carlo mode, as described earlier. The monte carlo mode is a great all-arounder.
It has a lot of backups where, when it loses track, it's able to take more of the check budget to find them again, but when it has a good lock on their position and speed, it subsides back down to about 1 check a second minimum. The average is a bit over two checks a second. When it thinks it might have lost someone, it can go up to ten per second, and if it's completely lost someone (defined as no hits in 5 seconds), it declares the track over: it's lost them.
This triggers a hail mary check where we do a grid of 11 by 11 checks spread out 9 chunks apart. This is really expensive as you can see: that comes out to 121 checks, which is a full second of our budget.
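For concreteness, a sketch of generating that grid of checks around the last known chunk: 11 by 11 points spaced 9 chunks apart, 121 in total (the names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "hail mary" grid: an 11 x 11 pattern of chunk checks centered on
// the last known position, spaced 9 chunks apart so adjacent render distances
// roughly tile the area. 121 checks total.
public final class HailMary {
    public record ChunkPos(int x, int z) {}

    public static List<ChunkPos> grid(ChunkPos lastKnown) {
        List<ChunkPos> checks = new ArrayList<>(121);
        for (int dx = -5; dx <= 5; dx++) {
            for (int dz = -5; dz <= 5; dz++) {
                checks.add(new ChunkPos(lastKnown.x() + dx * 9, lastKnown.z() + dz * 9));
            }
        }
        return checks;
    }
}
```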
It's worth it given how rare it is, and when it works, it works well. We also trigger a similar check in the other dimension, in case they went through a portal. When the most recent hit in this mode is within a render distance of all hits in the last 30 seconds, we switch to stationary mode.
This is because we have determined pretty much exactly what chunk they must be standing in to explain the last couple dozen hits. We switch to just checking that one chunk over and over. We slowly get less frequent, all the way down to 1 check every 8 seconds.
This is 10x slower than the monte carlo mode. It turns out, the vast majority of our active tracks are using this at any given time. Generally people are either traveling, or are effectively stationary. If they never move outside of their render distance from that point, the stationary filter will just comfortably check in every 8 seconds. If the stationary filter misses, we switch back to monte carlo since they're clearly on the move now.
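The switching rule can be sketched roughly like this, using the 9x9 render distance mentioned further down (so a radius of 4 chunks) and the 30 second window; everything else here is illustrative, not the real filter:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;

// Sketch of the mode-switch rule: if the newest hit is within a render distance
// of every hit from the last 30 seconds, the player is effectively standing
// still, so drop to stationary mode; a stationary miss flips back to monte carlo.
// The radius of 4 chunks comes from the 9x9 render distance described in the text.
public final class TrackMode {
    public record Hit(int chunkX, int chunkZ, Instant time) {}

    static final int RENDER_DISTANCE_CHUNKS = 4;

    /** recentHits is assumed to be ordered oldest-to-newest. */
    public static boolean shouldGoStationary(List<Hit> recentHits, Instant now) {
        if (recentHits.isEmpty()) return false;
        Hit newest = recentHits.get(recentHits.size() - 1);
        return recentHits.stream()
                .filter(h -> Duration.between(h.time(), now).getSeconds() <= 30)
                .allMatch(h -> Math.abs(h.chunkX() - newest.chunkX()) <= RENDER_DISTANCE_CHUNKS
                            && Math.abs(h.chunkZ() - newest.chunkZ()) <= RENDER_DISTANCE_CHUNKS);
    }
}
```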
All of these hits and tracks are stored to the database. Every single time we get a response from 2b2t of "HIT" (this chunk is loaded), we save it to a giant table of hits. We've had over three billion of these. Each hit is also associated, optionally, with a track. This is what makes the data useful.
Every track is assigned an ID. When a track switches modes from monte carlo to stationary and back, we don't change the track ID. But when a new track has to be started, such as when they change dimensions, it gets a new ID.
To keep track of that, each track gets a "previous track id". This allows us to trace back the history of how people have traveled. A hit is as simple as an ID number, coordinate, server ID (almost always 1, for 2b2t, but we've briefly run this on 9b and constantiam too), dimension, track ID (optional), and whether or not it's legacy.
There are about three billion of these. A track is a bit more complicated: it has an ID of its own, the ID of the first hit in this track, the ID of the most recent hit in this track (this is updated constantly), the timestamp of that most recent hit, the dimension and server ID, the previous track ID, and again whether or not it's legacy. There are about ten million of these.
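In code terms, the two records look roughly like this (the field names are illustrative; the real thing is a pair of database tables, not Java objects):

```java
import java.time.Instant;

// Sketch of the two core records as described: a hit is tiny, a track carries the
// bookkeeping needed to chain tracks together. Field names are made up.
public final class Model {
    public record Hit(
            long id,
            int chunkX, int chunkZ,
            short serverId,     // almost always 1, for 2b2t
            short dimension,
            Long trackId,       // null when the hit isn't part of any track
            boolean legacy) {}

    public record Track(
            long id,
            long firstHitId,
            long latestHitId,       // updated constantly as new hits come in
            Instant latestHitAt,    // timestamp of that most recent hit
            short dimension,
            short serverId,
            Long previousTrackId,   // lets us chain a new track back to the one it continues
            boolean legacy) {}
}
```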
For example, when a player logs into the server, we look up the most recent time they left, then we look up all tracks that were last updated around then, give or take a minute. We take a guess that maybe this player logging out was what caused those tracks to end, so we attempt to "resume" all those tracks from that time, by just rechecking their last successful hit. This is extremely successful and is a crucial component of tracking people. Now we have the system that takes in this data and decides what to do with it.
I haven't mentioned anything yet about deciding what's a base, and what's just an area someone's flying through, or how we determine who is where, or how we decide what areas we're going to world download.
This first step is called the aggregator. Every once in a while it looks through the hits table for all hits that are new since last time. It filters out hits that are within a certain distance of a highway, or within a certain distance of spawn.
It also ignores the nether. Then, it looks up our clustering entry for this chunk coordinate. This is a giant table called dbscan. Unlike the hits table, this table will only ever have one row for a given chunk coordinate in a given world.
It keeps track of all the activity we've observed at that chunk. When a new hit comes in, we mark down the most recent hit as that timestamp. We also update our list of "timestamp ranges".
It's actually really simple: if two hits come in with less than 2 minutes of time in between, we make an educated guess that a player was loading that chunk for that whole duration, and we add that range from first hit to second hit to our list of timestamp ranges for that chunk. If a new hit comes in that's also within 2 minutes of the most recent, we increase the range. If a new hit comes in that's outside the 2 minute range, we don't mark the range in between as occupied.
This allows us to keep track of what chunks are actually occupied, i.e. chunks where a player is spending time rather than just passing through. For example, our stationary filter will do this easily. The monte carlo filter will as well, but unintuitively.
Even though it spams checks all throughout your render distance to determine exactly where you likely are, it will in all likelihood hit any specific given chunk about once a minute. The purpose of the timestamp ranges system is to keep track of how interesting a given area is, without bias from our two filter modes, which are wildly different from each other: stationary sends less than a tenth as many checks per second as monte carlo, and monte carlo spams checks all throughout the 9x9 render distance while stationary just checks one chunk over and over.
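A sketch of that per-chunk bookkeeping, applying the two-minute rule from above; the class and field names are made up for illustration:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Sketch of the per-chunk clustering entry: one of these per chunk coordinate per
// world. Two hits less than two minutes apart extend (or open) an occupancy range;
// a longer gap leaves the span in between unmarked. Names are illustrative.
public final class ChunkActivity {
    public record Range(Instant start, Instant end) {}

    private Instant lastHit;                               // most recent hit at this chunk
    private final List<Range> ranges = new ArrayList<>();  // merged occupancy ranges

    public void onHit(Instant hitTime) {
        if (lastHit != null
                && Duration.between(lastHit, hitTime).compareTo(Duration.ofMinutes(2)) < 0) {
            Range last = ranges.isEmpty() ? null : ranges.get(ranges.size() - 1);
            if (last != null && !last.end().isBefore(lastHit)) {
                // the previous hit already closed this range, so just extend it
                ranges.set(ranges.size() - 1, new Range(last.start(), hitTime));
            } else {
                // two close hits with no open range yet: mark the span between them occupied
                ranges.add(new Range(lastHit, hitTime));
            }
        }
        // a gap of 2+ minutes adds nothing; the time in between is not marked occupied
        lastHit = hitTime;
    }

    /** Rough measure of how interesting this chunk is: total occupied time. */
    public Duration totalOccupied() {
        Duration total = Duration.ZERO;
        for (Range r : ranges) total = total.plus(Duration.between(r.start(), r.end()));
        return total;
    }
}
```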
Creating this way to count up how much time has been spent at a given spot, while maintaining completely neutral behavior between either filter being used, was the hardest part here.