In the world of computer science, there is a long-standing joke that the ultimate test of any new hardware is whether it can run the 1993 first-person shooter, Doom. Recently, a company called Cortical Labs achieved exactly that, but their hardware was not made of silicon - it was made of roughly 200,000 living human brain cells.
This is not simply a quirky technological stunt. It represents a fundamental shift in machine learning, moving away from artificial neural networks and toward Synthetic Biological Intelligence (SBI). Here is a look under the hood at the architecture of a biological computer.
The Pong Precedent
To understand the significance of the Doom experiment, we have to look at the baseline established by Cortical Labs' previous work. In earlier iterations, researchers proved that a monolayer of living cells (dubbed "DishBrain") could be taught to play Pong.
While impressive, Pong is fundamentally a 2D game of predictable physics with a direct input-output relationship. The critical academic takeaway from that earlier experiment was sample efficiency. When pitted against state-of-the-art Deep Reinforcement Learning algorithms (like DQN or PPO), the biological neurons adapted and learned the game significantly faster than the silicon-based AI, requiring far fewer training episodes to improve their hit-to-miss ratio. The biological cells achieved this while consuming a fraction of the power required by traditional computing.
The Complexity of Doom
Upgrading the system from Pong to Doom is a massive structural leap. Doom is chaotic. It is a 3D environment that requires spatial exploration, threat identification, and dynamic reaction. The engineering challenge is complex: how do you feed a 3D environment to a petri dish of isolated cells?
The Hardware: The CL1 Array
The computing module that makes this possible is the Cortical Labs CL1 - a high-density multi-electrode array housing the living human neurons. To bridge the gap between the digital game engine and the biological tissue, researchers had to translate the visual data of the game into the native language of the brain: electricity.
Using a custom API (application programming interface), an independent researcher mapped the game’s video feed to specific patterns of electrical stimulation across the chip:
Sensory Input: When a digital enemy appears on the left side of the screen, the system sends an electrical pulse to the specific electrodes located under the left sensory region of the neuronal culture.
Motor Output: The system then listens for the biological response. As the neurons react to the localized stimulus and fire, the array detects these action potentials (spikes).
Execution: These biological spikes are translated back into motor commands within the game engine. A specific firing pattern commands the character to shoot, while a different spatial pattern commands it to turn or move right.
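The closed loop described above (encode the screen as stimulation, read out spikes, decode an action) can be sketched in a few lines. This is a minimal illustration only: the class and function names below are hypothetical, not Cortical Labs' actual API, and the decoding rule is a toy stand-in for whatever spatial spike patterns the real system uses.

```python
# Hypothetical sketch of the stimulate -> record -> act loop.
# Names and thresholds are illustrative assumptions, not the real CL1 API.
from dataclasses import dataclass


@dataclass
class Frame:
    enemy_x: float  # enemy's horizontal screen position, 0.0 (left) to 1.0 (right)


def encode_stimulus(frame: Frame, n_electrodes: int = 8) -> list[int]:
    """Sensory input: map a screen region to the electrode(s) to pulse."""
    idx = min(int(frame.enemy_x * n_electrodes), n_electrodes - 1)
    return [idx]  # pulse the electrode under the matching sensory region


def decode_action(spike_counts: list[int]) -> str:
    """Motor output: translate recorded spiking activity into a game command."""
    # Toy decoder: compare firing in the left vs. right half of the motor region.
    half = len(spike_counts) // 2
    left, right = sum(spike_counts[:half]), sum(spike_counts[half:])
    if abs(left - right) < 2:  # roughly balanced firing -> fire the weapon
        return "shoot"
    return "turn_left" if left > right else "turn_right"
```

In a real system the decoder would be far richer, but the architecture is the same: a thin translation layer sits between the game engine and the electrode array, converting pixels to pulses on the way in and spikes to keypresses on the way out.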
The Takeaway
While the cells are currently playing like a novice who has never seen a computer, they are actively demonstrating the ability to seek out enemies, spin, and fire. More importantly, because of the newly developed API, this complex translation of 3D data into biological electrical signals was coded and implemented in less than a week.
By harnessing the innate, self-organizing properties of living neurons, we are witnessing the early stages of biocomputing. It is a pathway to achieving real-time, adaptable learning that currently eludes even the most advanced, power-hungry deep learning models on the market.