Nothing has changed on my end, and I now lose 100% of my games... It makes weird moves now and can leave a threat untouched on the board, etc. Why?
But it got so much worse after the last BETA update. Six games in a row it could have finished off the enemy hero but refused to do so... I could do 12 damage, the hero had 6 health, and it just kept killing minions instead of going for the kill. That's a pretty dumb AI, if I may say so.
Which two Beta version #s are you talking about? I can tell you specifically what changed between them. I don't think any of the exception fixes should have affected the bot, but we'll double-check once we know which versions you're referring to.
I noticed the change in behaviour between HearthbuddyBETA 0.3.736.76 and the one just before it. Something must have changed, since it behaves completely differently now and doesn't finish the hero off when it can. It's pretty stupid to have the power to do 12 damage while the hero has 6 health and still just chase minions and lose the game because of it; I lost 6 ranks during that test. I didn't record which BETA I was using when I ranked up to 15, so I can't tell you the version number where Hearthbuddy did kill the hero when possible instead of trying to clear the board all the time.
I'll go over specifically what changed in the versions around the one you mentioned:

#76 - A null reference exception was fixed to solve this issue. The actual change shouldn't affect AI logic, because it is consistent with how SF handles null targets in its other functions. It's possible the fix isn't implemented correctly, but in that case fixing the exception would mean tracking down a design issue in SF, which I'd have to ask botmaker about.

#75 - TritonHs.Concede logic was updated to solve this issue. Nothing else in that commit changed AI behavior.

#74 - "* SF should now calc card placement." The setting "HREngine.Bots.Settings.Instance.simulatePlacement" was set to true rather than false. This is line 98 of Routines\DefaultRoutine\DefaultRoutine.cs; you can try changing it back to false, saving, restarting the bot, and testing again. There were a few other API changes, but worst case those would make the bot play cards in slots that the client shifts left or right, not cause the logic breakdowns you've described, so I don't think they're a factor.

Those are the only two real changes to the AI during that time that could have any impact. Here are two specific things you can change, retest, and report back on to see whether it seems "better" or not. I suggest Notepad++, but you can use anything with line number support, or just Ctrl+F the code:

1. "Routines\DefaultRoutine\DefaultRoutine.cs", line 98

Change:
Code:
HREngine.Bots.Settings.Instance.simulatePlacement = true; // set this true, and ai will simulate all placements, whether you have a alpha/flametongue/argus

To:
Code:
HREngine.Bots.Settings.Instance.simulatePlacement = false; // set this true, and ai will simulate all placements, whether you have a alpha/flametongue/argus

2. "Routines\DefaultRoutine\Silverfish\ai\PenalityManager.cs", Line 98

Change:
Code:
if (target != null && !target.own && !this.tauntBuffDatabase.ContainsKey(name))

To:
Code:
if (!target.own && !this.tauntBuffDatabase.ContainsKey(name))

Note that with #2, your bot will throw an exception and stop if that code path is actually hit with a null target. Make sure to save and restart the entire bot so the changed files are reloaded.

I'd suggest changing only #1 first and retesting, then trying #2 only if it still seems as bad. I'd be fine with reverting #1 and investigating it further first, but #2 really shouldn't be causing issues serious enough to justify making the bot stop each time (at least until we figure out why it's happening).
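For context on why #2 is the riskier revert, here is a minimal standalone sketch of the null-guard pattern that the #76 fix follows. This is not the actual Silverfish code; the Minion class, the tauntBuffDatabase contents, and the GetPenalty method are simplified stand-ins, but the shape of the condition matches the line in PenalityManager.cs:

Code:
using System;
using System.Collections.Generic;

// Simplified stand-ins for the Silverfish types; the real PenalityManager
// works on much richer minion/hand objects.
class Minion
{
    public bool own;          // true if the minion belongs to us
    public string name = "";
}

class PenaltyDemo
{
    // Stand-in for this.tauntBuffDatabase in PenalityManager.cs.
    static readonly Dictionary<string, int> tauntBuffDatabase = new Dictionary<string, int>();

    static int GetPenalty(string cardName, Minion target)
    {
        // With the null guard (the #76 fix), an untargeted play simply skips
        // this penalty branch instead of crashing.
        if (target != null && !target.own && !tauntBuffDatabase.ContainsKey(cardName))
        {
            return 5; // penalize buffing an enemy minion
        }
        return 0;
    }

    static void Main()
    {
        // Targeted play: behaves the same with or without the guard.
        Console.WriteLine(GetPenalty("Blessing of Kings", new Minion { own = false, name = "Boulderfist Ogre" }));

        // Untargeted play: without "target != null", evaluating target.own here
        // would throw a NullReferenceException and the bot would stop mid-turn.
        Console.WriteLine(GetPenalty("Blessing of Kings", null));
    }
}

That exception is exactly why reverting #2 makes the bot stop whenever the scenario occurs, rather than just play differently.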
Nope, not yet. We've only made minimal changes so far, and we need to work out the actual SF/HsB integration issues before we look into making SF run faster the way silver.exe did.