I read the code
Very impressive, and the first point is that it means the code isn't half garbage. But did you read all the code? The local logic is one thing, even more so with clean code, but it's hardly ever the whole thing.
No, it's tedious and I only have time once my kids are in bed. I program in Java, so reading .NET code written with different programming habits is exhausting.
This is only the part of the ratio that influences the deployment size (what the Pandorans can throw at you). It also uses the "threat level" (low, medium, high).
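To picture it, here is a minimal sketch of the general shape (in Java, since that's what I write; every name and number below is invented, not the actual .NET code):

enum ThreatLevel { LOW, MEDIUM, HIGH }

class DeploymentSizer {
    // Invented base deployment sizes per threat level.
    static int baseSize(ThreatLevel threat) {
        switch (threat) {
            case LOW:    return 4;
            case MEDIUM: return 6;
            default:     return 8; // HIGH
        }
    }

    // ratio is the DDS output in [0, 1]: low after player losses,
    // high while the player keeps winning.
    static int deploymentSize(double ratio, ThreatLevel threat) {
        return Math.max(1, (int) Math.round(baseSize(threat) * ratio));
    }
}

The difficulty setting then applies its own factor on top of something like this (more on that below).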
The code is quite easy to go through; good practices have been followed.
OK, a pro. I was very impressed; I still am, but a bit less.
So you can see, clearly or with effort, the intended base logic, but not so easily the bugs and side effects. And with multiple systems and more complexity, side effects are hard to avoid entirely, plus there are the bugs that generate side effects of their own.
The more code, the more potential for bugs. It's hard to offer this amount of flexibility and think of every use case.
The code seems quite clean to me and nicely done. It is readable and seems maintainable, which is a big plus.
Good, but the bugs are big clues that this isn't true throughout all of the code. Still, generally very clean code is for sure a big plus for them.
I wonder how many game teams attack the problem directly with automated testing, as Larian did with the DOS series. No matter how clean the code is, the lack of it is also a source of regressions.
Their unit tests don't ship with the code, so I can only assume they have many of them to protect against regressions when refactoring.
The big problem is that time spent on tests competes with time spent on "game" code. Working in software, I know testing was always seen as a bad return on investment (by non-devs, and by some devs I knew). It took time to make them understand it was worth it (mostly by writing tests and showing them all the bugs they had and had never found).
Lol, not many people go on believing for long that they spit out perfect code without testing, and manual testing is so tedious it's quickly forgotten.
Very true indeed. I'm not a programmer or a coder, but I know how hard it is, even with an easy, user-friendly GUI… it is still not an easy task.
Sorry to bother you, but do you have any idea how to solve this issue?
Is it using PPDefModifier?
Because if that's the case, I can see from the code that it should interpret Lorem.Ipsum[0].etcaetera correctly…
unless there is a bug, of course, but I only went quickly through the code.
You should try with:
"field": "ResourcePack.ResourceCost.Values[0].Value",
It seems you have to specify the type first.
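If I remember the mod's file format correctly, a complete entry looks something like this (the guid and value are placeholders I made up; only the field path comes from above, so substitute your def's real guid):

[
  {
    "guid": "00000000-0000-0000-0000-000000000000",
    "field": "ResourcePack.ResourceCost.Values[0].Value",
    "value": 50,
    "comment": "placeholder guid and value, just to show the shape"
  }
]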
If this is the reality of the DDS, then that would explain quite a lot, actually; by design that system punishes loss-averse behavior because it only dials back once the player's been given their lumps. It respects neither the re-loading casual player unwilling to accept a negative outcome nor the power gamer that takes increasingly exploitative steps to stay two steps ahead. What it does seem to shake hands with, however, is the soft-ironman player that's willing to take a loss here and there.
Have you been able to determine whether there are any adjustments or bounds applied to the system based on the difficulty setting?
The difficulty setting still applies a factor to deployment numbers.
Since the DDS generates a ratio between 0 and 1, I'd say that it helps players who have trouble advancing.
So, on Legendary, you can make a little sacrifice once in a while (since recruits only cost food) to get an easier playthrough.
So what I'm hearing is that the difficulty setting only adjusts what is derived from the algorithm rather than constraining the algorithm itself, which is a key distinction. If, say, the lowest difficulty capped the adjustment at 0.7 as opposed to 1, the game could only get so much harder than the "baseline" experience, and at a certain point it would simply "allow" the player to keep winning. As it is, it instead keeps trying just as hard as on the harder difficulty settings to eventually start beating the player and reach an equilibrium… which may not be the experience a player is looking for on a lower difficulty setting.
Without the ratio, the game would be difficult, as some have experienced. Let's say that is the true experience.
The ratio makes it possible to give easier missions if the player starts losing too much. It eases up, so you can take advantage of it if you want: send a rookie alone on a mission and get him killed. The mission result will be such a failure, and the ratio so close to 0, that you'll start getting very easy missions.
There are other things at play, but at least this part is used to determine how heavy the enemy deployment is.
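If it helps, here is a rough guess at the shape of the outcome tracking (the rolling window, the weighting, and all the names are my invention; we only saw the ratio code, not what feeds it):

import java.util.ArrayDeque;
import java.util.Deque;

class OutcomeTracker {
    private final Deque<Double> recent = new ArrayDeque<>();
    private static final int WINDOW = 5; // invented window size

    // score in [0, 1]: 0 = total disaster, 1 = flawless mission.
    void recordMission(double score) {
        recent.addLast(score);
        if (recent.size() > WINDOW) {
            recent.removeFirst();
        }
    }

    // Rolling average of recent outcomes, used as the DDS ratio.
    double ratio() {
        return recent.stream()
                     .mapToDouble(Double::doubleValue)
                     .average()
                     .orElse(1.0); // assume full pressure before any data
    }
}

Under that assumption, throwing away a rookie pushes a 0 into the window and drags the average down for the next few missions, which is exactly the exploit described above.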
Capping the ratio wouldn't stop the algorithm from adjusting in the opposite direction, you know.
I think you are missing (or not showing) some other key part of the code. This one does provide a value between 0 and 1 (does it?), but we don't know how it's handled after that.
I think the hard cap should probably be in whatever function is processing those outcome values. Right now it can apparently grow without limit; you could clamp it, e.g. between 0.7 and 1.0, to limit just how bad things can get on a given difficulty level.
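As a sketch of the idea (the difficulty names, bounds, and everything else here are made-up illustration, not the game's code):

enum Difficulty { ROOKIE, VETERAN, HERO, LEGEND }

class RatioCap {
    // Hypothetical per-difficulty ceilings: the lower the difficulty,
    // the lower the ceiling, so the game can only get so much harder
    // than the baseline experience.
    static double capFor(Difficulty d) {
        switch (d) {
            case ROOKIE:  return 0.7;
            case VETERAN: return 0.8;
            case HERO:    return 0.9;
            default:      return 1.0; // LEGEND keeps the full range
        }
    }

    // Clamp the raw DDS ratio into [0, cap] before anything consumes it.
    static double clamp(double rawRatio, Difficulty d) {
        return Math.min(capFor(d), Math.max(0.0, rawRatio));
    }
}

That way the lowest difficulty would stop escalating at 70% of the maximum pressure, while Legend would behave exactly as it does today.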
Well, sorry, I haven't read any more of this code. I was busy working on my first mod: enabling the use of 3 mutations.
Just tested and it seems to work.
I'll see how to package it for Nexus… but now I need some sleep.
Looks like there is no cap difference between difficulty levels, only differences in resources. PP's difficulty design is a mystery. Imho, the way the AI can think and behave should be limited per difficulty setting; for instance, a crabbie should be dumb on Rookie and godlike on Legendary. I don't know… I like the Nemesis system in Shadow of Mordor, where enemies could learn the player's moves.
Or rather, the cap is the same. When the player is acing mission after mission, the game keeps getting harder until the player either stops acing or it reaches the upper bound of what it'll ever throw at the player, and that upper bound is the same regardless of difficulty. With how the math works, it also means that the lower the difficulty, the more drastic the adjustment over time: the difficulty factor sets a lower baseline, so climbing to the same shared cap is a proportionally bigger jump.
Hey, it was a comment, not a demand. I merely think there's some kind of integral value function that actually sets the difficulty level, taking this ratio as an input. You did mention a rolling average?