In the vast landscape of artificial intelligence, where replicating human-like cognitive abilities remains a constant challenge, a new approach emerges: the Cincinnati Algorithm of Learning. In the spirit of John Conway’s “Game of Life,” it uses a handful of simple rules to explore how a species might navigate the uncertain terrain of evolution and successfully propagate, played out in an intriguing game called “The Intelligence Evolution Casino.”

Much like Conway’s “Game of Life,” where the behavior of a population is governed by a few simple laws, the game pits a species against a casino dealer armed with a deck of red and black cards. The objective is to guess the color of the next card, and the success or failure of those guesses determines the fate of the species on its evolutionary journey.

The Intelligence Evolution Casino is a thought experiment. It is an attempt to strip the task of learning down to its core: to use logic and simulation to explore how a primitive species with no brain might have taken its first steps toward learning and making predictions. The Casino has led to an interesting discovery that may be a key to future advances in Artificial Intelligence.

The Game Rules

To determine its evolutionary fate, a species must face Harambe, who is a card dealer at a Cincinnati casino. Harambe has a deck of cards, face down. The species doesn’t know the makeup of the deck (the ratio of black to red cards), nor the number of cards in the deck.

The species must use a strategy to guess whether Harambe’s next card will be Red or Black. At each “hand,” the species guesses, and then Harambe shows the card, revealing whether the guess was correct. The card is then returned to the deck, which is reshuffled.

Card counting is prohibited — the primitive species does not have the capability to count. And Harambe doesn’t like card counters.

The species plays some predetermined number of hands (for example, 10,000). If it wins fewer than one-third of the hands, it is not intelligent enough to survive as a species. If it wins between one-third and two-thirds, it barely ekes by without thriving: it passes its genes to a next generation of the same size, with no population growth. And if it wins two-thirds or more, it thrives, reproducing at increasing rates.
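
For readers who want to experiment, here is a minimal Python sketch of the casino loop and the survival rules. The names (play_casino, evolutionary_fate) and the strategy interface, which sees only the previously shown card, are illustrative choices, not part of the original formulation.

    import random

    def play_casino(strategy, deck_red_fraction, hands=10_000, rng=None):
        """Play `hands` guesses against a deck with the given fraction of red cards.

        `strategy` is a callable taking the previously shown card ("R", "B", or
        None on the first hand) and returning a guess. Because each card is
        shuffled back into the deck, every draw is an independent sample from
        the same red/black mix. Returns the fraction of hands won.
        """
        rng = rng or random.Random()
        wins = 0
        last_card = None
        for _ in range(hands):
            guess = strategy(last_card)
            card = "R" if rng.random() < deck_red_fraction else "B"
            wins += (guess == card)
            last_card = card
        return wins / hands

    def evolutionary_fate(win_rate):
        """Map a win rate onto the survival thresholds described above."""
        if win_rate < 1 / 3:
            return "extinct"
        if win_rate < 2 / 3:
            return "survives, no growth"
        return "thrives"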

Strategies

Three simple strategies come to mind, and a fourth has been discovered that combines randomness with memory to arrive at superior outcomes.

  • Always Guess Red (Strategy 1): A simplistic strategy in which the species always chooses red may lead to extinction if the deck is predominantly black. This one-dimensional approach lacks adaptability and resilience.
  • Random Guessing (Strategy 2): A strategy in which the species randomly guesses red or black hovers around the breakeven point: enough to survive, but with no substantial population increase.
  • Follow the Last Card (Strategy 3): The species guesses arbitrarily on the first hand and then always guesses the color of the card it just saw. This strategy offers stability, but it lacks sophistication.
  • The Cincinnati Algorithm (Strategy 4): This groundbreaking technique outshines its predecessors. The species begins by guessing the color of the first card. After each subsequent card, the species uses a set of 2-colored chips, flipping them randomly and placing them in pairs with its tracking chips. When a mismatch is found, a specific procedure is initiated to adapt the chips in preparation for the next guess. This method demonstrates a remarkable ability to adapt to changing conditions.
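
The first three strategies are simple enough to sketch directly. Assuming the play_casino interface from the earlier sketch, each is a function of the last card shown; the function names here are illustrative (the fourth strategy needs the chip mechanics described in the next section).

    import random

    _rng = random.Random()

    def always_red(_last_card):
        """Strategy 1: guess red no matter what."""
        return "R"

    def random_guess(_last_card):
        """Strategy 2: flip a fair coin on every hand."""
        return _rng.choice("RB")

    def follow_last_card(last_card):
        """Strategy 3: repeat whatever Harambe showed last.
        The first guess is arbitrary, since no card has been seen yet."""
        return last_card or "R"

For example, evolutionary_fate(play_casino(always_red, 0.3)) essentially always comes back “extinct,” while follow_last_card wins roughly 58% of the hands on that same 30%-red deck: enough to survive, not enough to thrive.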

Details of The Cincinnati Algorithm

The key innovation of The Cincinnati Algorithm is the pairing of a random event with a memory bit. Information about the previously observed cards is retained using casino chips that are red on one side and black on the other, and random events are generated using the same style of 2-color chips.
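
In code, that dual role is almost trivially small: the same two-sided token serves as a stored bit when left on the table and as a coin flip when tossed in the air. This representation is purely illustrative, not taken from the patent.

    import random

    RED, BLACK = "R", "B"   # the two faces of a chip

    def flip_chip(rng=random):
        """Toss a chip in the air: a fair random event."""
        return rng.choice((RED, BLACK))

    def turn_over(chip):
        """Turn a chip lying on the table: flips the stored bit."""
        return RED if chip == BLACK else BLACK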

The Mathematics of Learning

The ability to observe a situation, quickly absorb information, and make predictions is critical to intelligence. But the brain’s mechanism for doing so is not well understood. For example, if you were to watch Harambe flashing a series of cards from a deck that was predominantly black (say two-thirds black), after a short period you’d be able to infer that the deck was mostly black. But try to explain the mechanism behind that inference. You’d find yourself saying things like “well, I can just tell,” or “it’s obvious.”

It’s not likely that your brain is counting the red cards and the black cards and computing a ratio. Rather, it uses some other, previously unknown mechanism to estimate the ratio. Your brain can accurately tell the difference between decks that are “almost all black,” “substantially black,” “about half black and half red,” “substantially red,” and “almost all red,” without much effort whatsoever.

Understanding this mechanism could advance our understanding of the biological brain, as well as enable tremendous advances in artificial intelligence systems.

Why the Cincinnati Algorithm Works

The Cincinnati Algorithm stores a representation of the previously shown cards in a form that is quickly usable for making the next guess. The tracking chips aren’t storing the colors of every card that was seen. Rather, the chips store a summary of what was observed.

In the summary, the chip closest to Harambe represents a best guess at which color has been seen the most, and therefore provides a good guess for the next card. The second chip refines the summary, either reinforcing the assertion that the first chip’s color is predominant in the deck or pulling back on it. The third chip refines the assertion of the first two, and so on.

The summary converges to be, in effect, a binary search toward the true mix of red and black cards in the deck. Before the first card is displayed, no summary chips are used, meaning our best guess is a 50/50 toss-up. Once we have a chip on the board, we’ve seen at least one card and we’re tracking a best guess at the predominant color, so our binary search moves halfway, to 75% red or 75% black. Note that this is just a short summary of what has been seen, and with a small number of chips it may differ from the actual ratio observed. Over time, however, it converges to represent the actual mix of cards in the deck.

The second chip refines the binary search: with one chip, the summary leans toward one color as predominant, estimating that it makes up three-fourths of the deck. The second chip refines that estimate either halfway back toward 50/50 (i.e. five-eighths) or halfway further toward 100% (i.e. seven-eighths). In this way, the longer the column of tracking chips, the more refined the estimate.
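
Read numerically, the column of chips behaves like a binary fraction. The sketch below is one interpretation consistent with the fractions quoted above (3/4, 5/8, 7/8); the function name is illustrative, and the exact bookkeeping in the patented method may differ.

    def estimated_red_fraction(chips):
        """Interpret a column of tracking chips as the current binary-search
        estimate of the fraction of red cards in the deck.

        `chips` is a list of "R"/"B" values, index 0 being the chip closest
        to Harambe. Each additional chip halves the remaining uncertainty.
        """
        estimate = sum(1 / 2 ** i for i, chip in enumerate(chips, start=1) if chip == "R")
        # The trailing half-step puts the estimate in the middle of the
        # interval the chips have narrowed down to.
        return estimate + 1 / 2 ** (len(chips) + 1)

    # Matches the text:
    #   estimated_red_fraction([])          -> 0.5    (no cards seen yet)
    #   estimated_red_fraction(["R"])       -> 0.75
    #   estimated_red_fraction(["R", "B"])  -> 0.625  (five-eighths)
    #   estimated_red_fraction(["R", "R"])  -> 0.875  (seven-eighths)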

With each card displayed, the species can modify its tracking chips in one of two ways, or it can leave them alone. In Step 5, the column of tracking chips is extended by one chip. This happens frequently when there are few tracking chips on the board and less frequently as the column grows, because the extension only happens when all of the pairs match, and randomly matching every pair becomes exponentially harder as the number of pairs increases.
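
Here is a sketch of that extension step under one reading of the text: each tracking chip is paired with a freshly flipped chip, and the column grows only when every pair matches. The function name is illustrative, and the color of the newly added chip is assumed here to be Harambe’s latest card, a detail the text does not spell out.

    import random

    def maybe_extend(chips, harambe_card, rng=random):
        """Step 5 (sketch): flip one random chip per tracking chip. If every
        flip matches its tracking chip, a 1-in-2**len(chips) event, extend the
        column by one chip. With an empty column this always fires, which is
        how the first chip appears on the board.
        """
        flips = [rng.choice("RB") for _ in chips]
        if all(flip == chip for flip, chip in zip(flips, chips)):
            chips.append(harambe_card)   # assumed: new chip takes the card's color
            return True
        return False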

In Step 7, the tracking chips are tweaked in the smallest way possible: the lowest-order chips are turned over, one at a time, until a turned-over chip matches Harambe’s card. This is effectively “adding one” or “subtracting one” in the binary search for the actual ratio of the mix of cards.
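
And a sketch of that tweak, under the same binary reading: with red as 1 and black as 0, turning over the lowest-order chips until one matches Harambe’s card is exactly a binary increment (red card) or decrement (black card) of the column. The function name is illustrative.

    def tweak_toward(chips, harambe_card):
        """Step 7 (sketch): starting from the chip farthest from Harambe (the
        lowest-order bit), turn chips over until a turned-over chip matches
        Harambe's card, then stop.
        """
        for i in range(len(chips) - 1, -1, -1):
            chips[i] = "R" if chips[i] == "B" else "B"   # turn the chip over
            if chips[i] == harambe_card:
                break
        # If every chip turned over without a match, the column has saturated;
        # the text doesn't describe how that edge case is handled.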

Intuition as to Why this Works

As the column of tracking chips grows, it becomes less likely that every pair will match; in fact, each pair has a 50% chance of not matching. Considering the pair closest to Harambe, that 50% chance means we’ll consider tweaking our summary about half the time based on that pair alone. From there, we only tweak if the random chip matches Harambe’s card. Given that the pair mismatched, the random chip shows the opposite of our tracked guess, so the odds of that depend on the make-up of Harambe’s deck.

In other words, if we’re expecting to see a red card and it comes up red, it’s unlikely that we’ll make an adjustment: things happened as expected. But if the unexpected happens, we usually tweak our tracker to refine our expectations. And the unexpected happens more frequently when our expectations are wildly wrong, and less frequently when our tracker accurately represents the mix of cards in the deck. This leads the tracker to converge onto the actual mix of cards in the deck.

Performance of the Various Strategies

Strategy 1 was to simply guess red every time. Using Monte Carlo simulations, we can test the performance of this strategy. These graphs represent 20,000 randomly selected mixes of card decks, each generating a pixel on the graph. The color of the pixel represents the overall outcome of 10,000 random hands using the strategy with that particular mix of cards.
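
A bare-bones version of that Monte Carlo sweep for Strategy 1 looks like the sketch below (plotting omitted; the function name is illustrative, and the pure-Python inner loop is slow at the full 20,000 × 10,000 scale and would normally be vectorized).

    import random

    def monte_carlo_always_red(trials=20_000, hands=10_000, rng=random):
        """Score the always-guess-red strategy over randomly chosen deck mixes,
        mirroring the 20,000-pixel, 10,000-hand setup described above."""
        results = []
        for _ in range(trials):
            red_fraction = rng.random()   # a randomly selected deck mix
            # Guessing red wins exactly when the drawn card is red, so the win
            # count is a binomial sample with success probability red_fraction.
            wins = sum(rng.random() < red_fraction for _ in range(hands))
            results.append((red_fraction, wins / hands))
        return results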

On the graph, a red-yellow-green scale indicates the percentage of correct guesses. In the chart below, the upper left corner represents decks with a high number of black cards and a low number of red cards. Unsurprisingly, guessing red is a poor strategy if Harambe happens to have a deck of that mix.

Likewise, the lower right portion of the graph shows green, where a strategy of always guessing red performs very well in predominantly red decks.

The strategy of always guessing red has inconsistent success and is highly environment-dependent. In biological evolutionary terms, a species with a rigid prediction capability might thrive in some environments and completely fail in others.

The second strategy is to guess by random selection. This provides average performance and, in the long run, is a break-even strategy.

The third strategy, always guessing the previous card, performs well with decks that are predominantly one color or the other, and breaks even with decks that are mixed fairly evenly.

Finally, the Cincinnati Algorithm performs the best of all four strategies. With decks that are even slightly weighted in favor of one color, the Cincinnati Algorithm is able to react to the mix, and predict based on the knowledge learned through observation.

The Evolutionary Advantage

The Cincinnati Algorithm introduces an element of adaptability, a form of rudimentary learning that requires no counting and no record of individual cards. By using a simple mechanism of flipping and adjusting chips, a species employing this strategy gains an edge in consistently making correct guesses. This nuanced approach not only ensures survival but also propels population growth beyond what the other strategies achieve.

Is this a Big Deal?

Current Artificial Intelligence techniques and models for learning are extremely complex, sometimes requiring rooms full of servers burning megawatts of power, and advanced degrees to understand and perfect. They are pushing against the upper boundaries of what is physically possible, given chip density.

The Cincinnati Algorithm can be easily described in 12 cartoon pictures honoring Harambe the Gorilla, may he rest in peace.

Is this a big deal? We think so.

Simplification of the learning model and algorithm may lead to tremendous advances in artificial intelligence, including higher performance, lower cost, lower power consumption, and greater portability.

In future blog posts, we’ll explain how.

The Path to Artificial General Intelligence

As the Cincinnati Algorithm proves its efficacy in the Intelligence Evolution Casino, the question arises: Could this be the key to Artificial General Intelligence (AGI)? Brain-CA Technologies, the pioneering force behind this patent-pending method, asserts that the answer is yes.

This innovative technique lays the groundwork for developing AGI systems that can adapt, learn, and make decisions in dynamic and unpredictable environments. The Cincinnati Algorithm provides a blueprint for incorporating adaptability and resilience into artificial intelligence, unlocking new possibilities for machines to navigate complex scenarios with the finesse and efficiency of biological intelligence.

The patent filing marks a significant milestone in the quest for AGI. With the Cincinnati Algorithm, we may be on the verge of designing artificial intelligence that rivals the cognitive abilities of the human brain. The future of intelligent machines is evolving, and the Casino offers a glimpse into a potential paradigm shift in AI development.