Sorry to do a bit of necromancy (not really), but I found this interesting reading. I'm guessing that if you write a machine-learning AI that's awesome at beating a particular game, it's actually very difficult to scale it back so that it doesn't just repeatedly humiliate its human opponents. Basically, teaching it to act human is harder than teaching it to act superhuman. Of course, even a superhuman AI would be at the mercy of Nuffle, so there's that small consolation.

I think a good BB AI is well beyond mankind (never mind Cyanide) at the moment. Google have been making the headlines for mastering Go and chess, but they have done so using brute-force computing - i.e. the computer plays against itself, trying all possible moves and recording the outcomes, then adjusting the likelihood of each move based on whether it won or not. (Literally, the AI program has two routines unique to the game type - one to list all the valid moves, one to check if the game is won.) In chess there might typically be 30 moves available, in Go perhaps more like 100(?). Imagine how many moves there are in BB - each player could move to maybe 150 squares, plus you need to worry about the order of moves and more. Don't even think about adding player skills into the mix.

I believe machine learning and brute force are different concepts. The Go AI 'learned' by playing matches against itself. The classic chess AIs (like the one that beat Kasparov) didn't do that: they had thousands of openings programmed into them manually, on top of which they had a brute-force approach calculating every possible board several moves into the future. The processing power required for such operations increases exponentially with each move, and it gets silly quite fast. But remember, they've now coded an AI that beats 99.8% of human players at StarCraft 2, so complexity isn't the problem it used to be. This success is a result not only of the increased processing power available for brute-force approaches, but of other, more economical solutions, like machine learning and the optimisation and pruning of algorithms to remove obviously bad choices (for example). But most of that isn't anything you can put into a commercial game. That being said, I doubt a 'good' BB AI is beyond what can be coded into a game run on a personal computer. The problem is that it's a really tough job, and one that would have to be done by someone who understands BB really well and is a really good AI coder on top (neither of which Cyanide appear to have access to). My guess is that a good BB AI would cost a significant portion of the game's budget. Better to put it into pretty grass and orc cheerleaders, obviously.

BTW, the AI does not have to be super-class level. I'm also pretty sure it's possible to code a much better AI than the current one, even on a budget.

Agreed. We need an AI that implements some general strategies for all teams, plus some for specific teams - like bashing for bash teams, recognizing surf opportunities for teams with Frenzy, and run-through-long-pass-win-the-game for elves. We do not need a self-learning AI for a thing like Blood Bowl. This kind of AI is much easier to create. It just has to behave at least somewhat like a human and use some common strategies. It does not have to be created with the idea of beating humans all the time; that would be quite pointless. But honestly, the only reason I might care about a decent AI for BB3 is that it might entice more people to play online and end up in our league.

I am not saying it's not complex, but it's a finite number of possibilities.
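The branching-factor argument above can be made concrete with a back-of-the-envelope calculation. This is a minimal sketch using the thread's own rough move counts (30 for chess, 100 for Go, 150 squares per Blood Bowl player) as assumed inputs; the function and dictionary names are illustrative, not from any real engine:

```python
# Rough cost of naive full-width lookahead: positions examined is about
# b ** depth, where b is the branching factor (legal moves per turn).
# The branching factors below are the thread's rough guesses, not measurements.
BRANCHING = {"chess": 30, "go": 100, "blood_bowl": 150}

def positions_to_search(branching_factor: int, depth: int) -> int:
    """Leaf positions a brute-force search visits at the given depth."""
    return branching_factor ** depth

for game, b in BRANCHING.items():
    print(f"{game}: depth 3 -> {positions_to_search(b, 3):,} positions")
```

Even these numbers understate Blood Bowl: 150 squares is per player, and a turn also involves choosing the order of many players' moves plus skill and block decisions, so the effective branching factor is far higher than the per-player figure.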
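The "two routines unique to the game type" idea - a generic self-play loop driven only by a move generator and a win test, recording outcomes per move - can be sketched as below. The game used here (Nim: take 1-3 tokens, taking the last token wins) and all function names are illustrative assumptions, not anything from the thread:

```python
import random

def legal_moves(tokens: int) -> list[int]:
    """Game-specific routine 1: list all valid moves (take 1-3 tokens)."""
    return [n for n in (1, 2, 3) if n <= tokens]

def is_terminal(tokens: int) -> bool:
    """Game-specific routine 2: the game is over when no tokens remain."""
    return tokens == 0

def self_play(tokens: int, rng: random.Random) -> int:
    """Play one game with uniformly random moves; return the winner (0 or 1).
    Whoever takes the last token wins."""
    player = 0
    while not is_terminal(tokens):
        tokens -= rng.choice(legal_moves(tokens))
        player = 1 - player
    return 1 - player  # the player who just moved took the last token

def opening_move_stats(tokens: int, games: int, seed: int = 0) -> dict[int, float]:
    """Win rate per opening move - the 'record outcomes, adjust move
    likelihood' step of self-play training, in miniature."""
    rng = random.Random(seed)
    wins = {m: 0 for m in legal_moves(tokens)}
    plays = {m: 0 for m in legal_moves(tokens)}
    for _ in range(games):
        move = rng.choice(legal_moves(tokens))
        # After the opening move it is the opponent's turn, so the opener
        # won the game iff player 1 of the remaining subgame wins it.
        opener_won = self_play(tokens - move, rng) == 1
        plays[move] += 1
        wins[move] += opener_won
    return {m: wins[m] / max(plays[m], 1) for m in wins}
```

The point of the sketch is that `self_play` and `opening_move_stats` know nothing about Nim; swapping in a different `legal_moves`/`is_terminal` pair retargets the loop. The thread's objection stands, though: for Blood Bowl the move generator itself is enormous, and dice-dependent skills make outcomes noisy.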