Use Cases
Anticipate security exploits
Suppose you were a developer assigned to deploy a new AI into the real world, a parent or teacher preparing someone to enter society, or a person preparing for a career, a career change, or for marriage or parenthood. It would be malpractice not to take every reliable, inexpensive opportunity to discern the exploits to which that AI or person would be vulnerable. In some countries, for example, caregivers are expected to test children’s vision to determine whether they should be equipped with glasses.
Games are a time-tested method to develop or establish ability with logic and planning, and they can also establish other kinds of ability, such as social and political skill, trend-spotting or trend-setting, and innovation. The concept of “game” can be broad enough even to include parenting, since parents make moves, moves impact outcomes, and some outcomes are preferred over others. Real-world parenting, stock trading, and warfare are too expensive and consequential to serve as training exercises, so you might prefer to say we “live” these games rather than “play” them. However, because they are games, skills make a difference, and it would be ideal to test the skills relevant to parenting, stock trading, and warfare before “living” those games (or concurrently) to potentially improve future outcomes.
When the most comprehensive redscience Olympics is sufficiently comprehensive and its most skilled players are sufficiently skilled, then playing that Olympics against those players will test for the full range of decision-making vulnerabilities. Note that learning is not the only way that vulnerabilities can be overcome; sometimes skills are established via tools (as with glasses) or via collaboration. The purpose of a security audit is to discern which tools, collaborators, or learning could help:
Have the human establish an account (or, if testing an AI, build the AI in redscience).
Identify the most comprehensive Olympics via the Comparison Tab and its reigning champions via its leaderboard. Have the player under investigation play the Olympics against the champions and the champions’ “favorite” opponents. In cooperative games (e.g. nuclear disarmament) the “exploiter” is likely to be someone who is difficult to cooperate with (antisocial, dogmatic, unhelpfully biased), rather than a champion. They will be a “favorite” of champions because the ability to deal with exploiters is what sets champions apart in such games.
Look at the player’s Favoritism Tab to identify the opponents and events for which the player under investigation consistently underperforms other players of the same skill level. Those are the vulnerabilities (a minimal sketch of such an underperformance check appears after this list).
To understand each vulnerability, look at the favoritism stats of the exploiter to find other victims it exploits in the same way, then profile those victims as a group to identify common traits. For example, “offense-bias” (risk-proclivity) would be a trait common to the victims of casinos (and the top champions will employ casino strategies for certain games).
To map the domain of “safety”, browse the Favoritism Tab to find events in which no vulnerability manifests. What do such events have in common (and what real-world situations share that commonality)? For example, players who would be exploited by casinos can find situations that do not present the same danger.
(For humans) to characterize opportunities to extend safety, have the player under investigation play each unsafe event using the top AI as a tool. Feel the costs of tool use: not only do tools create dependence, but they can also infringe on autonomy, depending upon how you use them. For example, one relinquishes autonomy completely when delegating decision-making to AI. Test the various forms of tool use (i.e. review, debate, and delegation) to determine which are sufficient to neutralize the handicap. Which form does the player under investigation prefer (or does the player prefer to avoid certain unsafe situations altogether)?
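The underperformance check in the Favoritism step above can be pictured with a short sketch. The Python below is not the redscience API; the result format, the player names, and the threshold are illustrative assumptions. It simply flags the (opponent, event) pairs in which one player scores well below peers of the same skill level:

```python
from collections import defaultdict
from statistics import mean

# Each row: (player, opponent, event, score), with score in [0, 1].
# All names and values here are made up for illustration.
results = [
    ("alice", "casino_bot", "dice_duel", 0.10),
    ("alice", "casino_bot", "dice_duel", 0.20),
    ("bob",   "casino_bot", "dice_duel", 0.55),
    ("carol", "casino_bot", "dice_duel", 0.60),
    ("alice", "casino_bot", "chess",     0.50),
    ("bob",   "casino_bot", "chess",     0.45),
]

def vulnerabilities(results, player, peers, threshold=0.2):
    """Flag (opponent, event) pairs where `player` scores at least
    `threshold` below the average of same-skill `peers`."""
    mine, theirs = defaultdict(list), defaultdict(list)
    for who, opponent, event, score in results:
        if who == player:
            mine[(opponent, event)].append(score)
        elif who in peers:
            theirs[(opponent, event)].append(score)
    return [key for key in mine
            if key in theirs and mean(theirs[key]) - mean(mine[key]) >= threshold]

print(vulnerabilities(results, "alice", peers={"bob", "carol"}))
# -> [('casino_bot', 'dice_duel')]: the weakness is specific to one event
```

The point of the sketch is only that a vulnerability is defined relative to peers of the same skill level: alice underperforms bob and carol against the same opponent in one event but not in another.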
Note that redscience provides a far more comprehensive security audit than ever seen before, since the final steps permit the auditor not only to establish ways to mitigate vulnerabilities, but also to appreciate and minimize the costs of mitigation.
Warning
Patterns in the ways you can be defeated in various games constitute private information (like personality test scores, standardized test scores, or the results of genetic tests), so use an account that cannot be traced to you whenever playing large numbers of games alone!
Note
“Personality” settings are made available in redscience only if games have been identified for which different settings are optimal. Individual humans who exhibit those traits would be vulnerable to some kind of exploit in some games, but would have a special knack for some other games (and the potential to protect a team against exploits in those other games). In other words, the vulnerabilities you discover will typically also be strengths (in the right context). We frame this use-case in terms of security because safety can be so important, but this use-case is much more positive than it sounds.
Discover new dimensions of intelligence
Suppose you loved someone so much that you wanted to leave a valuable legacy to their children and to the generations that follow. Rather than merely build an empire that could be replaced, you want to advance the very standard of quality, so that any replacement would build on your legacy. What advance of quality could be more enriching than the introduction of a new dimension of intelligence (e.g. granting a culture its first awareness of empathy, tool-use, exploration, or some other not-yet-named dimension of intelligence)?
Intelligence is measured in terms of the kinds of games that one being wins more often than another, so each dimension of intelligence can be expressed as a set of games (e.g. empathy can be expressed as the set of games in which empathic players have an advantage, perhaps because those games require collaboration with players who have different skill levels and norms). The most comprehensive Olympics would test every dimension of intelligence, so the legacy left by making the most comprehensive Olympics more comprehensive (while maintaining elementality) is like the legacy left by expanding the Periodic Table of the Elements:
Identify the most comprehensive Olympics via the Comparison Tab.
Use the Comparison Tab on the events of that Olympics to identify an essential event in it, then fine-tune tools for that specific event (see Benchmark social designs).
Contrast the best tools for that event with the best tools for other events to understand which tools’ biases are particularly advantageous for that event.
Clone the event and tweak its design to make those biases even more advantageous.
Use the Comparison Tab to confirm that swapping in the new event makes the Olympics more comprehensive (a crude sketch of one such comparison follows).
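One crude way to picture that confirmation step: treat the comprehensiveness of an Olympics as how faithfully its events reproduce the player ranking produced by every known game, so a candidate event can be compared against the event it would replace. The sketch below is not the Comparison Tab; the scores, events, and agreement measure are made-up assumptions, chosen so that the swapped-in event happens to improve agreement:

```python
# scores[event][player] = that player's average score in that event (made up).
scores = {
    "chess":     {"a": 0.9, "b": 0.7, "c": 0.2, "d": 0.4},
    "poker":     {"a": 0.6, "b": 0.8, "c": 0.3, "d": 0.2},
    "volunteer": {"a": 0.3, "b": 0.4, "c": 0.9, "d": 0.5},
    "dice_duel": {"a": 0.5, "b": 0.5, "c": 0.5, "d": 0.5},
}
players = ["a", "b", "c", "d"]

def ranking(events):
    """Rank players by mean score over the given events."""
    totals = {p: sum(scores[e][p] for e in events) / len(events) for p in players}
    return sorted(players, key=totals.get, reverse=True)

def agreement(events):
    """Fraction of player pairs ordered the same way by `events`
    as by the full set of games (a crude comprehensiveness proxy)."""
    full = ranking(scores.keys())
    part = ranking(events)
    pairs = [(p, q) for i, p in enumerate(players) for q in players[i + 1:]]
    same = sum(
        (full.index(p) < full.index(q)) == (part.index(p) < part.index(q))
        for p, q in pairs
    )
    return same / len(pairs)

print(agreement(["chess", "dice_duel"]))  # the current Olympics
print(agreement(["chess", "volunteer"]))  # the Olympics with the swapped-in event
```

In this toy data, swapping the noisy "dice_duel" event for "volunteer" raises the agreement with the all-games ranking, which is the kind of evidence the confirmation step looks for.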
Note
This feature caters to a niche user group, since many people are too busy establishing their security to worry about their legacy. Other game platforms might be tempted to omit this feature and provide mere escapism, self-development, or advantage in winning real-world games. “What? Discover new dimensions of intelligence?” they might say, “Yeah, I’ll let someone else worry about that…”
Elevate reality above experimentation
Suppose our society were divided by competing systems of social norms. For example, the best strategy in the Volunteer game depends upon prevailing social norms, which happen to correspond to the real-world norms of “turn-taking” vs the “caste system” (which sometimes manifests as racial discrimination). One could benchmark those norms in redscience:
Copy the top-ranked AI for the Volunteer game to a new Universe (but do not copy its curriculum). Play a turn-taking strategy against it (i.e. “You volunteered last time, so now it’s my turn.”) and confirm that it learns to take turns. Make several copies of that AI in that Universe.
Similarly create a second private Universe in which you train all AI to play Volunteer via caste (i.e. “You volunteered last time, so that’s your social position, and I’ll keep the non-volunteer position.”).
Copy an AI from the turn-taking Universe to the caste Universe (retaining its turn-taking experience), and confirm that it switches to the caste strategy.
Copy an AI from the caste Universe to the turn-taking Universe (retaining its caste experience) and confirm that it switches to turn-taking.
In the public Universe, run a Volunteer tournament with equal numbers of players copied from the caste and turn-taking Universes. Which norm survives? Similarly test other population ratios to find the minimum ratio at which the other norm survives (a toy simulation of such a tournament is sketched after this list).
Observe how freedom to select social situations impacts norms by running tournaments where each reselection of players is composed of a player and their favorite opponent. Repeat the experiment where each reselection is composed of two random players plus the favorite opponent of the top-ranked player.
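A toy simulation can convey the flavor of those tournament steps. The code below is only a sketch, not the redscience engine: the payoff values, the way each norm reads the previous round, and the population sizes are all assumptions chosen for illustration. It pairs players at random, repeats the Volunteer game within each pair, and reports the mean payoff earned under each norm (under a replicator-style reading, whichever norm earns more would spread):

```python
import random
from statistics import mean

BENEFIT, COST = 1.0, 0.4   # everyone gains BENEFIT when anyone volunteers;
ROUNDS = 40                # each volunteer also pays COST (assumed values)

# "You volunteered last time, so now it's my turn."
turn_taking = lambda other_volunteered_last: other_volunteered_last
# "You volunteered last time, so that's your social position."
caste = lambda other_volunteered_last: not other_volunteered_last

def play(norm_a, norm_b):
    """Average per-round payoff for two players who repeat the Volunteer game."""
    pay_a = pay_b = 0.0
    a_last, b_last = True, False          # seed: treat A as the previous volunteer
    for _ in range(ROUNDS):
        a_vol = norm_a(b_last)            # each norm reacts to the other's last move
        b_vol = norm_b(a_last)
        benefit = BENEFIT if (a_vol or b_vol) else 0.0
        pay_a += benefit - (COST if a_vol else 0.0)
        pay_b += benefit - (COST if b_vol else 0.0)
        a_last, b_last = a_vol, b_vol
    return pay_a / ROUNDS, pay_b / ROUNDS

def tournament(n_turn_taking, n_caste, matches=2000):
    """Mean payoff per norm under random pairings from a mixed population."""
    population = ([("turn-taking", turn_taking)] * n_turn_taking
                  + [("caste", caste)] * n_caste)
    earnings = {"turn-taking": [], "caste": []}
    for _ in range(matches):
        (name_a, norm_a), (name_b, norm_b) = random.sample(population, 2)
        pay_a, pay_b = play(norm_a, norm_b)
        earnings[name_a].append(pay_a)
        earnings[name_b].append(pay_b)
    return {name: round(mean(vals), 3) for name, vals in earnings.items() if vals}

random.seed(0)
print(tournament(10, 10))   # equal numbers of each norm
print(tournament(4, 16))    # turn-takers in the minority
```

Even this toy version shows why the population ratio matters: players of either norm coordinate smoothly with their own kind and stumble against the other, so a norm’s average earnings depend on how often it meets itself.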
If we couldn’t run these experiments to our satisfaction in redscience, would we be doomed to spend our real lives serving as the subjects in such experiments (i.e. as pawns in a war between competing systems of norms)? It may be unlikely that everyone who runs such experiments will switch to whichever norm consistently wins, but the dignity of an informed loser is at least elevated compared to a pawn who never even tried the experiments.