Use Cases

Anticipate security exploits

Suppose you were a developer assigned to deploy a new AI into the real world, a parent or teacher preparing someone to enter society, or a person preparing yourself for a career, a career shift, or to become a spouse or parent. It would be malpractice not to take every reliable, inexpensive opportunity to discern any exploits to which that AI or person would be vulnerable. In some countries, for example, caregivers are expected to test children’s vision to determine whether they should be equipped with glasses.

Games are a time-tested method to develop or establish ability with logic and planning, and they can also establish other kinds of ability, such as social and political skills, trend-spotting or trend-setting, and innovation. The concept of “game” can be broad enough even to include parenting, since parents make moves, moves impact outcomes, and some outcomes are preferred over others. Real-world parenting, stock trading, and warfare are too expensive and consequential to serve as training exercises, so you might prefer to say we “live” these games, rather than say we “play” them. However, because they are games, skills make a difference, and it would be ideal to test the skills relevant to parenting, stock trading, and warfare before “living” those games (or concurrently) to potentially improve future outcomes.

When the most comprehensive redscience Olympics is sufficiently comprehensive and its most skilled players are sufficiently skilled, then playing that Olympics against those players will test for the full range of decision-making vulnerabilities. Note that learning is not the only way that vulnerabilities can be overcome; sometimes skills are established via tools (as with glasses) or via collaboration. The purpose of a security audit is to discern which tools, collaborators, or learning could help:

  1. Have the human establish an account (or, if testing AI, build it in redscience).

  2. Identify the most comprehensive Olympics via the Comparison Tab and its reigning champions via its leaderboard. Have the player under investigation play the Olympics against the champions and the champions’ “favorite” opponents. In cooperative games (e.g. nuclear disarmament), the “exploiter” is likely to be someone who is difficult to cooperate with (antisocial, dogmatic, unhelpfully biased), rather than a champion. Such exploiters will be “favorites” of champions because the ability to deal with exploiters is what sets champions apart in such games.

  3. Look at the player’s Favoritism Tab to identify opponents and events for which the player under investigation consistently underperforms other players of the same skill-level. Those are the vulnerabilities (a sketch of this comparison follows the list).

  4. To understand each vulnerability, look at the favoritism stats of the exploiter to find other victims it exploits in the same way, then profile those victims as a group to identify common traits. For example, “offense-bias” (risk-proclivity) would be a trait common to the victims of casinos (and the top champions will employ casino strategies for certain games).

  5. To map the domain of “safety”, browse the Favoritism Tab to find events in which no vulnerability manifests. What do such events have in common (and what real-world situations share that commonality)? For example, players who would be exploited by casinos can find situations that do not present the same danger.

  6. (For humans) to characterize opportunities to extend safety, have the player under investigation play each unsafe event using the top AI as a tool. Feel the costs of tool use: not only do tools create dependence, but they can also infringe on autonomy, depending upon how you use them. For example, one relinquishes autonomy completely when delegating decision-making to AI. Test the various forms of tool use (i.e. review, debate, and delegation) to determine which are sufficient to neutralize the handicap. Which form does the player under investigation prefer (or does the player prefer to avoid certain unsafe situations altogether)?
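
Steps 3 and 4 boil down to comparing a player’s record against each opponent with the record of same-skill peers. The following minimal sketch assumes game records and skill ratings have been exported from the platform; the record format, names, and thresholds are hypothetical and illustrate only the comparison, not redscience’s actual interface.

    from collections import defaultdict

    # Hypothetical export of game records: (player, opponent, result), result = 1 for a win.
    records = [
        ("alice", "casino_bot", 0), ("alice", "casino_bot", 0), ("alice", "casino_bot", 1),
        ("bob", "casino_bot", 1), ("bob", "casino_bot", 1),
        ("carol", "casino_bot", 1), ("carol", "casino_bot", 0),
        ("alice", "chess_bot", 1), ("bob", "chess_bot", 1), ("carol", "chess_bot", 0),
    ]
    ratings = {"alice": 1500, "bob": 1510, "carol": 1495}  # any published skill rating

    SKILL_BAND = 50  # players within this rating difference count as "same skill"
    DEFICIT = 0.25   # flag under-performance at least this large as a vulnerability

    wins, games = defaultdict(int), defaultdict(int)
    for player, opponent, result in records:
        wins[(player, opponent)] += result
        games[(player, opponent)] += 1

    def win_rate(player, opponent):
        return wins[(player, opponent)] / games[(player, opponent)]

    def peer_rate(player, opponent):
        """Win rate of same-skill peers (excluding the player) against this opponent."""
        w = g = 0
        for (p, o) in games:
            if o == opponent and p != player and abs(ratings[p] - ratings[player]) <= SKILL_BAND:
                w += wins[(p, o)]
                g += games[(p, o)]
        return w / g if g else None

    # Step 3: opponents against whom the player under investigation underperforms peers.
    player_under_test = "alice"
    for opponent in {o for (_, o) in games}:
        if (player_under_test, opponent) not in games:
            continue
        baseline = peer_rate(player_under_test, opponent)
        if baseline is not None and baseline - win_rate(player_under_test, opponent) >= DEFICIT:
            print(f"vulnerability vs {opponent}: "
                  f"{baseline - win_rate(player_under_test, opponent):.2f} below same-skill peers")

    # Step 4: profile the exploiter's other victims as a group to look for shared traits.
    exploiter = "casino_bot"
    victims = [p for (p, o) in games if o == exploiter
               and peer_rate(p, o) is not None and peer_rate(p, o) - win_rate(p, o) >= DEFICIT]
    print("victims to profile as a group:", victims)

In redscience itself the Favoritism Tab performs this comparison for you; the sketch only shows what “consistently underperforms other players of the same skill-level” means quantitatively.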

Note that redscience provides a far more comprehensive security audit than any available before, since the final steps permit the auditor not only to establish ways to mitigate vulnerabilities, but also to appreciate and minimize the costs of mitigation.

Warning

Patterns in the ways you can be defeated in various games constitute private information (like personality test scores, standardized test scores, or the results of genetic tests), so use an account that cannot be traced to you whenever playing large numbers of games alone!

Note

“Personality” settings are made available in redscience only if games have been identified for which different settings are optimal. Individual humans who exhibit those traits would be vulnerable to some kind of exploit in some games, but would have a special knack for some other games (and the potential to protect a team against exploits in those other games). In other words, the vulnerabilities you discover will typically also be strengths (in the right context). We frame this use-case in terms of security because safety can be so important, but this use-case is much more positive than it sounds.

Benchmark social designs

Suppose you were assembling a team of humans, or an AI, or a collaboration between humans and AI to play a real-world game: to trade stocks, to conduct diplomacy, to make policy, to compete in business, or to address a problem or need. One approach is to try to copy whatever design is currently most successful (e.g. poach from successful teams and ask the poached employees to replicate what worked for them in the past). That approach is called “dogma”.

Dogma is sub-optimal if existing social designs are sub-optimal. Previous simulations find that existing social designs retard social progress by a factor of 3 to 25. This should not surprise us, since we can look back in history to find social designs that are considered barbaric today, even though they were the most successful of their age.

An alternative approach, a way to escape dogma, is to test alternative social designs via simulations before deploying them in real life. To the extent that the most comprehensive Olympics in redscience is sufficiently comprehensive and its most skilled non-human players are sufficiently skilled, the alternative approach has already been completed, and one would simply build real-world teams that match the team-sizes, personality ratios, curriculum ratios, and collaboration techniques of redscience champions.

If scriptures were a collection of dogmatic best-practices for social engineering, then platforms like redscience would replace scripture, but, unlike scripture, such platforms need not identify with any particular religion, and they offer those who question their wisdom a procedure to challenge that wisdom. For example, if redscience’s top non-human champion were a team of AI that included an extreme personality which social engineers hesitated to include in real-world teams (e.g. as some social engineers have hesitated to include “feminine” personalities in certain leadership teams), then the engineers could challenge the wisdom of including that personality as follows:

  1. Clone the top team to create a new one, and make the objectionable personality less extreme in the cloned member.

  2. Run an Olympic tournament which includes both the parent and its modified clone. Does the modified clone outperform its parent? What kinds of real-world situations match the kinds of events on which the parent outperforms the clone (i.e. what specifically can we appreciate about the extreme personality)? A sketch of this comparison follows the list.
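
The comparison in step 2 amounts to a per-event tally of tournament results. The following minimal sketch assumes the results have been exported as one score per event for the parent and for the clone, with event categories assigned by hand; the events, categories, and numbers are invented for illustration and are not redscience outputs.

    from collections import defaultdict

    # Hypothetical exported tournament results: event -> (category, parent_score, clone_score).
    results = {
        "nuclear_disarmament": ("cooperative", 0.71, 0.64),
        "public_goods": ("cooperative", 0.66, 0.60),
        "risk": ("alliance", 0.55, 0.58),
        "hide_and_seek": ("deception", 0.49, 0.57),
        "poker": ("probabilistic", 0.52, 0.51),
        "chess": ("planning", 0.60, 0.61),
    }

    # Average margin by which the parent (with the extreme personality) beats its clone.
    margins = defaultdict(list)
    for event, (category, parent, clone) in results.items():
        margins[category].append(parent - clone)

    for category, values in sorted(margins.items()):
        mean = sum(values) / len(values)
        print(f"{category:>13}: parent - clone = {mean:+.3f} over {len(values)} event(s)")

    # Categories with a positive mean margin suggest the kinds of real-world situations
    # in which the extreme personality earns its keep.
    advantages = sorted(c for c, v in margins.items() if sum(v) / len(v) > 0)
    print("parent advantage in:", advantages)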

Science will not instantly discern all wisdom and completely displace all other sources of wisdom, but science can become useful for guiding not only physical engineering and medicine but also social engineering, and platforms like redscience can make science as accessible as scripture. For example, if we previously turned to scripture to validate our response to personality differences, redscience can displace scripture for that function (something previous science was not sufficiently accessible to do).

Note

The most comprehensive Olympics will include cooperative games (like the Public Goods game), alliance games (like Risk), deception games (like Hide and Seek), and probabilistic games (like Poker), as well as planning games (like Chess), so this approach hedges against the potential for any real-world game to shift in any of these directions. If we can limit the shifting of real-world games, then it may be appropriate in the procedures above to use Olympics that are not the most comprehensive.

Discover new dimensions of intelligence

Suppose you loved someone so much that you wanted to leave a valuable legacy to their children and to the generations that follow. More than building an empire that could be replaced, you want to advance the very standard of quality so that any replacement would build on your legacy. What advance of quality could be more enriching than the introduction of a new dimension of intelligence (e.g. granting a culture its first awareness of empathy, tool-use, exploration, or some other not-yet-named dimension of intelligence)?

Intelligence is measured in terms of the kinds of games which one being wins more than another, so each dimension of intelligence can be expressed as a set of games (e.g. empathy can be expressed as games in which empathic players have an advantage, perhaps because those games require collaboration with players who have different skill-levels and norms). The most comprehensive Olympics would test every dimension of intelligence, so the legacy left by making the most comprehensive Olympics more comprehensive (while maintaining elementality) is like the legacy left by expanding the Periodic Table of the Elements:

  1. Identify the most comprehensive Olympics via the Comparison Tab.

  2. Use the Comparison Tab on the events of that Olympics to identify an essential event in it, then fine-tune tools for that specific event (see Benchmark social designs).

  3. Contrast the best tools for that event to the best tools for other events to understand which tools’ biases are particularly advantageous for that event.

  4. Clone the event and tweak its design to make those biases even more advantageous.

  5. Use the Comparison Tab to confirm that swapping in the new event makes the Olympics more comprehensive (a rough sense of what such a comparison involves is sketched after the list).
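
The Comparison Tab is the authority on what counts as “more comprehensive,” but one can build intuition offline. As a purely illustrative proxy (not redscience’s actual measure), treat an event as redundant to the extent that its results correlate with the results of other events, so that an Olympics becomes more comprehensive as redundancy falls. The sketch below assumes a matrix of player scores per event, with made-up numbers.

    import numpy as np

    # Hypothetical player-by-event score matrix (rows = players, columns = events).
    events = ["chess", "poker", "hide_and_seek", "public_goods"]
    scores = np.array([
        [0.9, 0.4, 0.3, 0.8],
        [0.7, 0.6, 0.5, 0.7],
        [0.2, 0.9, 0.8, 0.3],
        [0.5, 0.5, 0.4, 0.6],
        [0.8, 0.3, 0.2, 0.9],
    ])

    # Correlation between events: highly correlated columns measure overlapping skills.
    corr = np.corrcoef(scores, rowvar=False)

    # Redundancy of an event = its strongest correlation with any other event.
    redundancy = {
        events[i]: max(abs(corr[i, j]) for j in range(len(events)) if j != i)
        for i in range(len(events))
    }
    for event, r in sorted(redundancy.items(), key=lambda kv: -kv[1]):
        print(f"{event:>13}: redundancy {r:.2f}")

    # Under this proxy, the most redundant event contributes the least new information,
    # so it is the natural candidate to clone and tweak (steps 2-4), and a successful
    # swap is one that lowers overall redundancy.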

Note

This feature caters to a niche user group, since many people are too busy establishing their security to worry about their legacy. Other game platforms might be tempted to omit this feature and provide mere escapism, self-development, or advantage in winning real-world games. “What? Discover new dimensions of intelligence?” they might say, “Yeah, I’ll let someone else worry about that…”

Elevate reality above experimentation

Suppose our society were divided by competing systems of social norms. For example, the best strategy in the Volunteer game depends upon prevailing social norms, which happen to correspond to the real-world norms of “turn-taking” vs the “caste system” (the latter of which sometimes manifests as racial discrimination). One could benchmark those norms in redscience:

  1. Copy the top-ranked AI for the Volunteer game to a new Universe (but do not copy its curriculum). Play a turn-taking strategy against it (i.e. “You volunteered last time, so now it’s my turn.”) and confirm that it learns to take turns. Make several copies of that AI in that Universe.

  2. Similarly create a second private Universe in which you train all AI to play Volunteer via caste (i.e. “You volunteered last time, so that’s your social position, and I’ll keep the non-volunteer position.”).

  3. Copy an AI from the turn-taking Universe to the caste Universe (retaining its turn-taking experience), and confirm that it switches to the caste strategy.

  4. Copy an AI from the caste Universe to the turn-taking Universe (retaining its caste experience) and confirm that it switches to turn-taking.

  5. In the public Universe, run a Volunteer tournament with equal numbers of players copied from the caste and turn-taking Universes. Which norm survives? Similarly test other population ratios to find the minimum ratio at which the other norm survives (a toy sketch of this ratio sweep follows the list).

  6. Observe how freedom to select social situations impacts norms by running tournaments where each reselection of players is composed of a player and their favorite opponent. Repeat the experiment where each reselection is composed of two random players plus the favorite opponent of the top-ranked player.
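
The ratio sweep in step 5 can be prototyped outside redscience with a toy model of the Volunteer game. The sketch below is only a caricature: the payoffs are invented, and the strategies are deliberately simple (turn-takers alternate who volunteers; caste players expect whoever volunteered first to keep volunteering). It reports each norm’s mean payoff across a range of population mixes; the norm that earns more at a given mix is the one we would expect to spread.

    import itertools

    BENEFIT, COST, ROUNDS = 4.0, 1.0, 10  # toy Volunteer-game payoffs

    def volunteers(norm, position, rnd, first_volunteer):
        """Decide whether this player volunteers in round `rnd` (0-based)."""
        if norm == "turn_taking":
            return rnd % 2 == position       # alternate who volunteers
        return position == first_volunteer   # caste: the first volunteer keeps the role

    def play_pair(norm_a, norm_b, first_volunteer=0):
        """Average per-round payoffs of a repeated two-player Volunteer game."""
        pay_a = pay_b = 0.0
        for rnd in range(ROUNDS):
            a = volunteers(norm_a, 0, rnd, first_volunteer)
            b = volunteers(norm_b, 1, rnd, first_volunteer)
            if a or b:                       # the benefit requires at least one volunteer
                pay_a += BENEFIT - (COST if a else 0.0)
                pay_b += BENEFIT - (COST if b else 0.0)
        return pay_a / ROUNDS, pay_b / ROUNDS

    def survey(fraction_turn_taking, population=20):
        """Mean payoff per norm when every pair in a mixed population plays both seatings."""
        n_turn = round(population * fraction_turn_taking)
        norms = ["turn_taking"] * n_turn + ["caste"] * (population - n_turn)
        totals = {"turn_taking": [0.0, 0], "caste": [0.0, 0]}
        for i, j in itertools.combinations(range(population), 2):
            for a, b in ((i, j), (j, i)):    # both seatings, to avoid position bias
                pay_a, pay_b = play_pair(norms[a], norms[b])
                for norm, pay in ((norms[a], pay_a), (norms[b], pay_b)):
                    totals[norm][0] += pay
                    totals[norm][1] += 1
        return {norm: s / c for norm, (s, c) in totals.items() if c}

    for fraction in (0.1, 0.3, 0.5, 0.7, 0.9):
        means = survey(fraction)
        summary = ", ".join(f"{norm} {mean:.2f}" for norm, mean in means.items())
        print(f"turn-takers at {fraction:.0%}: {summary}")

In this toy, whichever norm already holds the majority tends to earn more, precisely the kind of result the redscience experiment is designed to test against richer strategies and learning.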

If we couldn’t run these experiments to our satisfaction in redscience, would we be doomed to spend our real lives serving as the subjects in such experiments (i.e. as pawns in a war between competing systems of norms)? It may be unlikely that everyone who runs such experiments will switch to whichever norm consistently wins, but the dignity of an informed loser is at least elevated compared to a pawn who never even tried the experiments.

Empower students of social science and computer science

Suppose you were a social science teacher or computer science teacher. It’s one thing to expose students to new ideas, but another thing to empower students to test those ideas for themselves. Although redscience is designed to be accessible at the secondary-education level, it is just as relevant in post-secondary education.