Learning from AI

Published August 4, 2024

#thoughts #futurism

Synesthesia is a perceptual phenomenon experienced naturally by a small fraction of the population, though it can also be induced temporarily by psychedelic drugs. It takes many forms, but all are characterized by involuntary perceptions triggered by otherwise ordinary sensory experiences: hearing colors, tasting sounds, smelling shapes. Some variants are common enough to have earned names. For the sake of example, let’s focus on grapheme-color synesthesia, in which colors are linked to letters and numbers (or “graphemes”). A grapheme-color synesthete perceives a particular color as part of the experience of seeing a particular grapheme, and the association is consistent: seeing the letter “A” may always evoke red. However, while some grapheme-color pairs are more common than others, there is no inherent reason for any pair to dominate, and no fully consistent pairing scheme across individual synesthetes.

This presents an opportunity, assuming synesthesia can be learned. In the same way that a writing system based on a commonly understood alphabet transformed our communication abilities, a shared experience of synesthesia layered on top of existing writing systems could transform their potential. If it were possible to teach a standardized grapheme-color alphabet to a population, it would undoubtedly open new possibilities for communication and societal development. Taking the opportunity further, standardizing and inducing other types of synesthesia as a set could have profound effects on the expansion of human consciousness.

Can synesthesia be learned? Research is limited, but the experiments that have been conducted suggest it may be possible for adults to acquire, at least temporarily, some synesthetic grapheme-color associations. There is evidence of a genetic component, but also some indication that synesthesia may depend on the character of formative experiences early in life. Attempts to induce long-term synesthesia have lacked two key pieces. First, although researchers applied consistent grapheme-color pairings to build associations, they could not ensure consistency across the subjects’ full experience, since exposure to unaltered graphemes is ubiquitous in everyday life. Second, if early-life developmental stages are critical to permanent synesthesia, adult subjects have already missed the window.

Leaving aside the plethora of ethical and practical questions, having an in-line device to modify or augment perception in real time would solve the problem of experiential inconsistency. If every time a person saw the letter “A” the resulting perception was accompanied by the device evoking the color red alongside it, it stands to reason (in my completely unqualified view) that the neural pathways linking the two concepts would eventually form. If the device were used early in life, it could take full advantage of the key developmental window for impressing such associations. No such device exists today, but with significant advances in AI and brain interface technology it may eventually be possible to construct a system capable of recognizing inputs and producing accompanying secondary inputs in the user, essentially simulating synesthesia. It is easy to see how the device would become unnecessary once synesthesia had been established during an initial training period, leaving the target perceptions unaltered thereafter.
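The essential behavior of such a device, a fixed mapping applied consistently to every grapheme the user encounters, is simple enough to caricature in code. The sketch below is purely illustrative: the letter-color pairs are arbitrary choices of mine, not attested synesthetic associations, and `overlay_colors` is a stand-in for the real-time perceptual augmentation the imagined device would perform.

```python
# Toy sketch of a standardized grapheme-color alphabet, applied the way
# the imagined device would: the same color accompanies every exposure
# to a given grapheme. The specific pairs are arbitrary examples.
GRAPHEME_COLORS = {
    "A": "red",
    "B": "blue",
    "C": "yellow",
    "1": "green",
    "2": "purple",
}

def overlay_colors(text):
    """Return (grapheme, color) pairs for a piece of text. Graphemes
    without an assigned color pass through unaltered (color None)."""
    return [(ch, GRAPHEME_COLORS.get(ch.upper())) for ch in text]

print(overlay_colors("AB1"))
# [('A', 'red'), ('B', 'blue'), ('1', 'green')]
```

The point of the lookup table is the consistency the experiments above could not guarantee: in this toy, there is no such thing as an unaltered “A”.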

While synesthesia is a fun and interesting example of what AI and brain interfaces might enable, more conventional educational goals would also benefit. Learning is hard, and the rate at which a student learns depends largely on the skill of the teacher and the amount of time available for individualized instruction. In practice, individualized instruction isn’t realistic for everyone, and even in the fortunate circumstances where it is possible, serious shortcomings still limit information transfer. The best teachers can efficiently ascertain why a student is struggling with a concept and devise creative ways to zero in on the key component, but this is easier said than done.

As long as we’re still mostly running on wetware, it seems unlikely that the near future will let us download files directly to our brains and say “I know kung fu,” but maybe we can have the next best thing. Teaching can be crudely thought of as programming the meat computer that is the brain, but teachers are blind to much of what happens in the mind of the learner. An imaginary telepathic teacher could provide explanatory input to a student, observe how the student processes it, and thereby close a much higher-fidelity feedback loop. An AI teacher with a brain interface, able to observe the inputs as the learner receives them, how those inputs are processed, and the results of that processing, could efficiently run experiments to determine how best to adjust the inputs for optimal outcomes. In other words, the perfect teacher could personalize explanations for a student until they “make it click.” Assuming a factual and unbiased curriculum could be constructed, it should be reasonable to expect that minds equipped with such a teacher would absorb information with high accuracy at a very high rate.
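The experiment-and-adjust loop described above can be caricatured as a bandit-style search: try different explanation styles, read out (in this fantasy, via the brain interface) how well each one lands, and converge on whatever makes it click. Everything below is a hypothetical stand-in: the explanation styles, the `simulated_comprehension` readout, and the epsilon-greedy rule are my own illustrative choices, not a description of any real system.

```python
import random

# Hypothetical explanation styles the AI teacher can choose between.
STYLES = ["verbal", "visual", "analogy"]

def simulated_comprehension(style):
    """Stand-in for the brain interface's readout of how well an
    explanation landed. This simulated learner favors analogies."""
    base = {"verbal": 0.3, "visual": 0.5, "analogy": 0.8}[style]
    return base + random.uniform(-0.1, 0.1)

def teach(trials=200, epsilon=0.1):
    """Epsilon-greedy loop: sample each style once, then mostly reuse
    the best-known style, occasionally experimenting with another."""
    totals = {s: simulated_comprehension(s) for s in STYLES}
    counts = {s: 1 for s in STYLES}
    for _ in range(trials):
        if random.random() < epsilon:
            style = random.choice(STYLES)          # explore
        else:
            style = max(STYLES, key=lambda s: totals[s] / counts[s])  # exploit
        totals[style] += simulated_comprehension(style)
        counts[style] += 1
    return max(STYLES, key=lambda s: totals[s] / counts[s])

print(teach())  # prints "analogy" for this simulated learner
```

The real version would of course be adjusting far richer inputs than three labeled styles, but the shape of the loop is the same: observe the learner’s processing, treat each explanation as an experiment, and keep what works.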