Detective fiction, and particularly whodunits, has long excelled at engaging its audience in trying to solve the mystery before the final reveal. Video games allow such stories to thrive with a level of interactivity that can directly involve the player in this process as an active participant rather than a passive observer.
One interesting way of empowering the player is to give them the burden of proof as an intellectual challenge: as long as the player is unable to prove to the game that they have solved the mystery, in part or in full, the game will not progress past certain points.
However, something that has always bugged me in detective games is that I never feel like I'm the one actually figuring out the solution. Sure, I solve it in my head, but I never feel like my progression in such games is directly related to my own reasoning skills. As I've been toying with the idea of creating my own detective game at some point, potentially one where cases would be randomly generated, I wanted to find a solution.
So today, we'll be looking at how video games can assess players in this regard... or rather, how they fail to do so.
Following the clues
Many games focus more on the investigation aspect than on the puzzle-solving aspect of the mystery, with characters making the revelations themselves once certain clues are discovered, either by confessing everything or through the protagonist stating their deductions publicly.
However, some games can implement an indirect knowledge assessment through the investigation itself by forcing the player to look for clues in obscure locations, as a way to ensure that they understood an important element of the mystery. Note that this is not limited to objects: it also covers more abstract searching, such as asking the right question to the right character, finding a relevant query while searching a database, or typing the correct URL for a website.
While this is a valid and very natural approach to assessment, it only applies to location-specific puzzles and can be exploited by exhaustively checking every single location. This can be mitigated by adding many locations, although that mitigation only works for a small subset of such puzzles.
As a side note, this assessment type also has the side effect of potentially leaving the player stuck or lost if they do not realize how to progress, which can be very frustrating.
Quiz time
Simply put, the game asks the player, at some point in the story, a series of questions about their thoughts on the current mystery. While these are usually multiple-choice questions, the player may sometimes be asked to type in short answers instead.
However, this approach can generally be exploited with ease. The list of possible answers is usually fairly small, and the question itself may hint at the answer. The latter is especially true for multiple-choice questions, as the listed options may lead the player to consider possibilities they would have otherwise overlooked. Once again, the only possible mitigation is to significantly increase the number of possible answers, which can only be done successfully for a small subset of questions.
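To make the exploit concrete, here is a minimal sketch of why small answer pools are so vulnerable: with no penalty for wrong answers, the expected number of blind guesses grows only linearly with the number of options (the function name and figures are illustrative, not from any particular game).

```python
def expected_guesses(num_options: int) -> float:
    """Expected number of tries to hit the right answer by guessing
    uniformly at random without repeats, assuming no penalty for
    wrong answers."""
    return (num_options + 1) / 2

# A typical 4-option multiple-choice question falls in ~2.5 guesses;
# even a free-form answer drawn from, say, 200 plausible inputs only
# takes ~100 tries on average.
print(expected_guesses(4))    # 2.5
print(expected_guesses(200))  # 100.5
```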
Game designers also have to be careful about how they implement this assessment type. Not only can it be difficult to integrate naturally into the story, but if the player has an epiphany about the mystery before the quiz, they might get frustrated when their character makes poor decisions, or refuses to make good ones, simply because the player knows better, souring the related story beats.
Because of these issues, this method of assessment is best used to test the player's memory of a story's details rather than their reasoning abilities. Despite this, it remains a very popular progression barrier for testing reasoning due to its simplicity.
Deductive and inductive reasoning
Some games go a more abstract route and implement a logic system in which the player is presented with a series of premises learned during the investigation and matches them to deduce conclusions, which then become new premises or unlock progression. Premises are granted to the player automatically based on the player character's observations, such as collected evidence and testimonies.
Unfortunately, this system is also very vulnerable to brute force: the total number of premises generally remains fairly small, and each premise can be checked against every other acquired premise quickly and easily.
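A rough sketch of the attack, assuming the deduction system boils down to a lookup table over premise pairs (all premise and conclusion names below are made up for illustration):

```python
from itertools import combinations

# Hypothetical deduction table mapping unordered premise pairs to
# conclusions; a real game would hide this behind its UI.
DEDUCTIONS = {
    frozenset({"victim_was_left_handed", "knife_found_in_right_hand"}):
        "the_scene_was_staged",
}

def brute_force(premises):
    """Try every pair of premises. With n premises this is only
    n*(n-1)/2 checks, so no actual reasoning is required."""
    found = []
    for a, b in combinations(premises, 2):
        conclusion = DEDUCTIONS.get(frozenset({a, b}))
        if conclusion:
            found.append(conclusion)
    return found

premises = ["victim_was_left_handed", "knife_found_in_right_hand",
            "window_locked_from_inside"]
print(brute_force(premises))  # ['the_scene_was_staged']
```

Even with 50 premises, that is only 1,225 pairs to click through.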
Still, this presentation format may cause the player to think more deeply about the gathered information than the previous solutions would, which is a significant advantage.
The main problem with this solution, however, arises when the selected premises lead to a conclusion different from the expected one, or to no conclusion at all. For example, given the premises "fresh tree leaves are under the bedroom's rug" and "the bedroom's window is closed and locked from the inside", one may infer "the bedroom's window was locked after the rug was placed in the room", which is not a guaranteed conclusion and may even seem unlikely depending on the context. This may break immersion, as the player will no longer be in sync with their character.
The core issue
As we've seen, almost all of the traditional solutions are vulnerable to brute-force attacks. In other words, if someone doesn't figure out the mystery, they can simply guess it until they get it right.
Is there an alternate solution that doesn't have this issue?
What about abductive reasoning?
While deductive reasoning allows for finding new information from known information, and inductive reasoning allows for finding possible generalizations of known information, abductive reasoning allows for finding possible causes of known information. Ironically, I've never witnessed the latter being used as a game mechanic in detective games, even though finding the cause of mysteries is their primary appeal.
Let's consider real-world mysteries for a moment. Excluding philosophical objections about how reality works, it is generally accepted that there is only one single true explanation for each mystery. The issue is that no one can ever be sure of the correctness of any given explanation.
Science's solution to this problem is to use Occam's razor. For those unaware of this concept, it can be described as the following:
Valid theories (i.e. non-contradictory and consistent with past observations) that require more assumptions than other valid theories must be rejected.
This principle can also be applied when judging trials. For instance, in modern justice systems, civil cases are judged by the "more than 1/2 probability" standard, while criminal cases are judged by the "no reasonable doubt" standard. While there is no agreed formal definition of what reasonable doubt is, I would like to propose the following one:
A probable event is subject to reasonable doubt if and only if there exists a valid theory, not rejected by Occam's razor, under which its probability is less than 1.
Some might argue that a probability of 1 in this definition is extreme, but it is the only one compatible with the legal principle of presumption of innocence. Regardless of your personal opinion on this matter, I hope the general idea of this definition matches your intuition of what reasonable doubt is just like it does mine.
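The two definitions above can be combined into a small executable sketch. The representation is an assumption of mine: each theory is reduced to a pair of (number of assumptions, probability it assigns to guilt), with validity already checked elsewhere.

```python
def surviving_theories(theories):
    """Occam's razor: keep only the valid theories that require the
    fewest assumptions; all others are rejected."""
    fewest = min(n for n, _ in theories)
    return [t for t in theories if t[0] == fewest]

def beyond_reasonable_doubt(theories):
    """Guilt is beyond reasonable doubt iff every theory surviving
    Occam's razor assigns it probability 1."""
    return all(p == 1 for _, p in surviving_theories(theories))

# A minimal rival theory that leaves guilt uncertain creates doubt:
print(beyond_reasonable_doubt([(2, 1.0), (2, 0.7), (5, 1.0)]))  # False
# Here the only minimal theory is certain of guilt:
print(beyond_reasonable_doubt([(2, 1.0), (3, 0.7)]))            # True
```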
The problem then becomes figuring out all of the smallest sets of assumptions that would explain the mystery. This process is called logic-based abduction.
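Here is what logic-based abduction looks like as an exhaustive search, under the simplifying assumption that each candidate assumption comes with the set of observations it would explain (all names below are invented for the example):

```python
from itertools import combinations

def minimal_explanations(observations, hypotheses):
    """Find all smallest sets of hypotheses whose combined effects
    cover every observation. `hypotheses` maps each candidate
    assumption to the set of observations it would explain."""
    names = list(hypotheses)
    for size in range(1, len(names) + 1):
        found = [set(combo) for combo in combinations(names, size)
                 if set().union(*(hypotheses[h] for h in combo))
                 >= observations]
        if found:
            return found  # all explanations of minimal size
    return []  # the observations cannot be fully explained

observations = {"bloody_knife", "open_window", "muddy_footprints"}
hypotheses = {
    "intruder_broke_in":  {"open_window", "muddy_footprints"},
    "victim_cut_cooking": {"bloody_knife"},
    "staged_burglary":    {"open_window", "bloody_knife",
                           "muddy_footprints"},
}
print(minimal_explanations(observations, hypotheses))
# [{'staged_burglary'}]
```

Note that the loop over subset sizes is exponential in the worst case, which foreshadows the complexity problem discussed later.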
One interesting consequence of such an approach, at least in theory, is that it can prevent some game design errors. For instance, if a mystery is not solvable or leads to a conclusion different from the intended truth, this abductive reasoning would still lead to a correct answer.
Let's suppose for a moment that we would like to create a game system where players can perform logic-based abduction from collected evidence and testimonies, and have their reasoning evaluated against the "no reasonable doubt" standard. A natural approach would be to let the player build their own theory, measure its number of assumptions, and compare it against alternative theories containing at most the same number of assumptions to determine whether any of them points to a different conclusion or is less complicated.
Of course, we must first ask ourselves what a theory must absolutely cover to be considered a sufficient answer. Obviously, it must address the primary motivation behind solving the mystery, e.g., whether a crime actually happened and who perpetrated it. However, simply stating "X did Y" is nothing more than a blind accusation unless one explains why this assumption is better than "X didn't do Y". More has to be done to explain how particular assumptions caused the observable evidence. At the other extreme, having to explain how an entire scene was built makes no sense either, as it would be far too tedious. The part that should be explained has to be relevant in some way to the mystery, at least in appearance, but what criteria should be used?
A classic approach to this problem is to cover means, motive and opportunity. However, this is insufficient: means and motive merely reduce the number of assumptions a theory needs; they do not constitute one. As for opportunity, it can be a good basis for a theory, but by itself it isn't an explanation.
I believe a good solution here is to separate observable information into two sets: the "normal" set and the "suspicious" set. The "normal" set would be composed of information that can be explained by ordinary situations with a reasonable prior probability but that remains available for theory-crafting, such as the existence of a potted plant in a living room. Meanwhile, the "suspicious" set would contain information whose prior probability in ordinary situations falls below the reasonable threshold but that might still be dismissed as an unrelated red herring during theory-crafting, such as a bloody knife. In general, it should be relatively easy for game designers to judge which set a piece of information belongs to, so while it might be worth analyzing further how to properly define the prior probability threshold, I will leave that out of the scope of this post.
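As a toy illustration of the split, a designer-chosen prior threshold is enough to partition the observed information (the threshold value and every prior below are made up for the example):

```python
# Hypothetical designer-chosen threshold: anything rarer than this in
# an ordinary scene is flagged as suspicious.
REASONABLE_PRIOR = 0.05

priors = {
    "potted_plant_in_living_room": 0.9,
    "rug_in_bedroom": 0.8,
    "bloody_knife": 0.001,
    "fresh_leaves_under_rug": 0.01,
}

normal = {e for e, p in priors.items() if p >= REASONABLE_PRIOR}
suspicious = {e for e, p in priors.items() if p < REASONABLE_PRIOR}
print(sorted(suspicious))  # ['bloody_knife', 'fresh_leaves_under_rug']
```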
With these "normal" and "suspicious" sets of information, the following could be implemented: the player sets up an initial scene based on the real one, then builds a sequence of events that leads to the final known scene. The following could then be counted as assumptions:
- Non-suspicious evidence in the final scene being modified with a different property in the initial scene.
- Suspicious evidence not having its suspicious properties modified in the initial scene.
- Hypothetical evidence added to the initial scene. (The explanation as to why that evidence was not found in the final scene is handled independently.)
- A step in the proposed sequence of events that is not a natural consequence of the prior states or not directly explained by known information.
- A piece of suspicious testimony that is not explained by the proposed sequence of events.
- A piece of testimony that contradicts the proposed sequence of events and whose contradictions cannot be explained by other known information. (Note that it is not the total number of violations that matters here, but the required number of assumptions that explains all the violations.)
A more thorough analysis would be required to determine how to properly define the scope of each assumption and how to lighten the player's work in explaining their theory, but that would be the general idea.
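The counting rules above could be sketched as follows. The theory representation is entirely hypothetical, and it flattens each bullet point into a pre-tagged list, which a real implementation would have to derive from the player's scene edits and event sequence:

```python
def count_assumptions(theory):
    """Rough proxy for the counting rules above: each unexplained
    item in the player's theory costs one assumption."""
    return (len(theory["modified_normal_evidence"])
            + len(theory["unmodified_suspicious_evidence"])
            + len(theory["hypothetical_evidence"])
            + sum(1 for step in theory["event_steps"]
                  if not step["justified"])
            + len(theory["unexplained_suspicious_testimony"])
            + len(theory["unexplained_contradictions"]))

# Invented example: one altered rug, one hypothesized spare key, and
# one event step that nothing in the known information explains.
theory = {
    "modified_normal_evidence": ["rug_moved_before_crime"],
    "unmodified_suspicious_evidence": [],
    "hypothetical_evidence": ["spare_key"],
    "event_steps": [{"name": "culprit_enters", "justified": True},
                    {"name": "culprit_locks_window", "justified": False}],
    "unexplained_suspicious_testimony": [],
    "unexplained_contradictions": [],
}
print(count_assumptions(theory))  # 3
```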
One big factor to mention here is the quality of the simulation used to build the sequence of events. Consider a player who wants to use the properties of elements to implement melting ice or a chemical reaction to explain the destruction of evidence, or who needs to model the spread and concentration of carbon monoxide from a car into different areas of a room to explain how a witness fell asleep in it, or who wants to build a Rube Goldberg machine to explain a locked-room mystery. The limits of the simulation should be considered carefully to allow sufficient creativity, which is not trivial.
Implementing reasonable doubt
Implementing the above system still leaves the question of how to test the player's theory against others.
A possible solution would be to have a second player attempt to discredit the detective player's theory by building a new one, similar to how a defense attorney attempts to discredit the prosecution's theory during a trial. If the defense's theory has at most the same number of assumptions as the original theory while pointing to a different answer to the mystery, the original player would lose; otherwise, they would win.
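That adversarial check is simple to state in code, assuming theories have already been reduced to an assumption count and a conclusion (both field names are illustrative):

```python
def detective_wins(prosecution, defense_theories):
    """The detective's theory stands unless some defense theory is no
    more complex and points to a different answer."""
    return not any(
        d["assumptions"] <= prosecution["assumptions"]
        and d["conclusion"] != prosecution["conclusion"]
        for d in defense_theories
    )

prosecution = {"assumptions": 3, "conclusion": "butler_did_it"}
defense = [{"assumptions": 3, "conclusion": "accident"},
           {"assumptions": 5, "conclusion": "gardener_did_it"}]
# The accident theory ties on assumptions with a different answer,
# so reasonable doubt remains:
print(detective_wins(prosecution, defense))  # False
```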
Is it possible, though, to automate the process of building a defense perfectly and within the constraints of the simulation? Unfortunately, logic-based abduction can be framed as a set cover problem, which is NP-complete in the general case. Hence, barring a major breakthrough, there is no known way to solve it other than checking every single combination, which requires an exponential number of tests and quickly becomes unmanageable to compute.
Some compromises exist, however. Since the mystery must first be crafted by game designers, the fictional truth behind it can be converted into the same theory format players use and stripped of needless (though accurate) assumptions, yielding a reference theory. It is also possible to design an artificial intelligence that uses heuristics to automatically build alternative theories, although its performance would need to be assessed. Not perfect, but potentially good enough.
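One classic heuristic such an AI "defense attorney" could borrow is the greedy set cover approximation: repeatedly pick the hypothesis that explains the most still-unexplained observations. It runs in polynomial time but may use more assumptions than the optimum (the hypothesis names below are invented):

```python
def greedy_theory(observations, hypotheses):
    """Greedy set cover: build a theory by always picking the
    hypothesis covering the most still-unexplained observations.
    Fast, but not guaranteed to be minimal."""
    uncovered, chosen = set(observations), []
    while uncovered:
        best = max(hypotheses,
                   key=lambda h: len(hypotheses[h] & uncovered))
        if not hypotheses[best] & uncovered:
            return None  # some observation cannot be explained
        chosen.append(best)
        uncovered -= hypotheses[best]
    return chosen

hypotheses = {
    "intruder_broke_in":  {"open_window", "muddy_footprints"},
    "victim_cut_cooking": {"bloody_knife"},
}
print(greedy_theory({"open_window", "muddy_footprints", "bloody_knife"},
                    hypotheses))
# ['intruder_broke_in', 'victim_cut_cooking']
```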
Is it worth the trouble? Probably not for manually-crafted mysteries as they involve a lot of narrative, but it might be a good approach for randomly-generated ones. Still, it would be quite the challenge to implement. Whether such a challenge is reasonable remains to be seen.