
  • It depends on the specifics of how the language is compiled. I’ll use C# as an example since that’s what I’m currently working with, but the process differs from language to language.

    C#, when compiled, is first translated into what is known as an intermediate language (MSIL for C# specifically). This intermediate file is basically a set of generic instructions that aren’t tied to any specific CPU, which is useful because different CPUs require different instructions. (There’s a small sketch of this at the end of this comment.)

    Then, when the program is run, a second compiler known as the JIT (just-in-time) compiler takes those intermediate instructions and translates them into native code for the specific CPU being used.

    When we decompile a C# DLL, we’re really taking the intermediate language (those generic, CPU-agnostic instructions) and translating it back into source code.

    To your second point, you are correct that the decompiled version reflects the optimized code and will be more efficient from a processing perspective, but that efficiency comes at the direct cost of being easy to understand at a human level. :)
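
    To make the intermediate-language step concrete, here’s a minimal sketch: a trivial C# method, with roughly the MSIL the compiler produces for it shown in comments. The class and method names are just placeholders, and the exact IL varies with compiler version and build settings; tools like ildasm or ILSpy will show you the real thing.

        public static class MathOps
        {
            // The C# compiler turns this into MSIL, not machine code.
            public static int Add(int a, int b) => a + b;

            // Roughly the MSIL emitted for Add in a release build:
            //
            //   .method public hidebysig static int32 Add(int32 a, int32 b) cil managed
            //   {
            //       ldarg.0   // push the first argument onto the evaluation stack
            //       ldarg.1   // push the second argument
            //       add       // pop both, push their sum
            //       ret       // return the value on top of the stack
            //   }
            //
            // Note there’s no x86 or ARM here: at run time the JIT translates
            // these generic stack instructions into native code for whatever
            // CPU the program happens to be running on.
        }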


  • The long answer involves a lot of technical jargon, but the short answer is that the compilation process turns high-level source code into something the machine can read, and along the way it usually drops a lot of data it no longer needs and does some low-level optimization to make things more efficient during actual processing.

    One can use a decompiler to take that machine code and attempt to turn it back into something human-readable, but the result will usually be missing variable names, function names, comments, etc., and will include the compiler-added optimizations, which makes it nearly impossible to reconstruct the original code. (There’s a before-and-after sketch at the end of this comment.)

    It’s sort of the code equivalent of putting a sentence into Google translate and then immediately translating it back to the original. You often end up with differences in word choice that give you a good general idea of intent, but it’s impossible to know exactly which words were in the original sentence.
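
    In the same spirit, here’s a hypothetical before-and-after in C# syntax. The names and the decompiled output are invented for illustration; how much actually survives depends on the language and toolchain (.NET, for instance, keeps method and parameter names in its metadata, while fully native binaries lose them unless debug symbols are shipped alongside).

        // What the author originally wrote:
        public static decimal ApplyDiscount(decimal price, decimal discountRate)
        {
            // Loyal-customer bonus: an extra 5% off
            decimal effectiveRate = discountRate + 0.05m;
            return price * (1m - effectiveRate);
        }

        // What a decompiler typically reconstructs: the logic round-trips,
        // but the comment is gone, the single-use local has been folded away,
        // and names that weren’t stored in the binary come back as placeholders.
        public static decimal sub_4011A0(decimal num1, decimal num2)
        {
            return num1 * (1m - (num2 + 0.05m));
        }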




  • Sure, but in Dark Souls there’s still significantly better design at work.

    Some differences in Dark Souls:

    • tutorial messages before the first major encounter explaining the controls
    • the difficulty scale between the entry level monsters and the boss is much smaller
    • the player can customize their build at least a little bit by selecting their starting loadout
    • the first boss is often difficult, but even if the player fails they can still progress, since they’re expected to lose

    AC6, on the other hand, lacks all of that. It gives you no tutorialization. You’re told to use a sword against shielded enemies, and then you’re supposed to somehow infer that the helicopter is also weak to swords. You’re meant to build up a stagger bar to open a window for big damage, but the game hasn’t even mentioned the stagger bar’s existence at this point. You’re stuck in a single mech loadout with no way to customize.

    Imagine if you had to fully kill the Asylum Demon the first time you encounter it. You’ve got no plunge damage, no gear, no grasp of the controls; you’re just forced to walk out of the jail door and beat the boss before you can engage with any other element of the game. That’s much closer to AC6’s presentation.



  • It’s not like their complaints are entirely without merit, though. I expected difficulty from a From Software game, but there’s usually an on-ramp to that difficulty.

    In AC6, that on-ramp is completely missing. You’re given four trivial fights with almost no tutorialization before being put up against a boss that expects you to know about the (as yet unexplained) stagger bar, and that also expects you to use your sword against the helicopter. That’s fairly unintuitive: you’ve only been told to use the sword once, and it was against enemies with shields, which reinforces the idea that sword beats shield, and the helicopter has no shield.