The So-Called "Cambridge University" Effect: Why Are Some Jumbled Words Easy and Others Hard?
In a Nutshell
- We read jumbled words mainly by how they look, not by sounding them out first
- The more a jumbled word looks like a real word, the easier it is to read—and the longer it takes to say "that's not a word"
- Swapping the first and last letters has a bigger effect than swapping the middle ones (the "Cambridge" effect)
- Scientists found the brain areas that handle this: one that "sees" the letters, and one that compares them to words we know
- This could help with understanding and treating reading difficulties
Why Can We Read Jumbled Words at All?
You've probably seen the famous claim circulating online: jumbled words supposedly stay readable as long as the first and last letters stay in place. Reading involves a lot of things: the shapes of letters, the sounds they make, and what words mean. So how do we still understand jumbled text? Researchers at the Indian Institute of Science (IISc) ran experiments to find out.
What Did the Scientists Do?
Volunteers were shown strings of letters and had to decide: Is this a real word or not? The results were striking. The more a jumbled non-word looked like a real word, the longer people took to answer. For example, PENICL (one letter moved from PENCIL) took longer to reject than EPNCIL, which is more scrambled. So when the letters are only slightly out of place, our brain hesitates before saying "not a word."
Two more patterns showed up. First, when letters were replaced by different ones, people could say "not a word" faster than when letters were only swapped. Second, the "Cambridge University" idea held up: messing with the first and last letters had a bigger effect than messing with the middle ones.
The Big Idea: We Compare What We See to Words We Know
The researchers proposed a simple idea. When we look at a string of letters, our brain builds a kind of visual pattern from those letters. Then it compares that pattern to the words we already know. To test this, they built a computer model made of artificial "neurons." Each neuron responded more to some letters than others. For a whole word, the model just added up the responses to each letter. No sounds, no meanings—only what the letters look like and where they sit.
This sight-only model predicted how long people took to process jumbled words. That led to an important conclusion: sound, pronunciation, and meaning don't contribute as much to jumbled-word reading as many had thought. What we see—the visual pattern of the letters—does a lot of the work.
What They Found in the Brain
When volunteers did the same task inside a brain scanner, the team could see which areas were active. They found one set of regions that seems to handle the visual pattern of the letters—how similar or different strings look. They found another area that seems to be involved in comparing what you see to the words stored in your memory (and thus deciding "word" or "not a word"). So: first we "see" the string, then we match it to what we know.
Understanding this pathway could eventually help with diagnosing and treating reading disorders, because we're getting a clearer picture of how the brain goes from seeing letters to recognising words.
What This Means for You
For parents and educators, the takeaway is that how letters look and where they appear really matters for reading. Activities that help children notice letter shapes, spot the odd one out, or tell similar strings apart are not just games—they're training the same visual system that lets us read smoothly and even decode jumbled words. At AlphaKhoj, we use these ideas to design exercises that build that visual foundation for reading.
How the Model Works
Agrawal et al. (2020) proposed that viewing a letter string activates a visual representation that is then compared to stored words. They built a model with two principles from high-level vision: (1) perceptually similar images evoke similar neural activity, and (2) the response to multiple items is a weighted sum of responses to the parts. So they assumed neurons tuned to letter shape (estimated from visual search on single letters), and that the response to a string is a linear sum of responses to each letter, with position encoded by different weights (e.g. first letter weighted more than middle). No bigram detectors, no sound, no meaning—just letter shape and position weights.
The same model explained both tasks. In visual search, people found the oddball string among distractors; search time defined “dissimilarity” between strings. That dissimilarity matched the model’s letter-based distances. In a lexical decision task (word or nonword?), nonwords that were visually closer to a real word in this space took longer to reject: e.g. PENICL (one letter transposed from PENCIL) took longer than EPNCIL (more scrambled). So the more a jumbled nonword resembled the word in this compositional letter space, the harder it was to say “not a word.” Word response times were driven mainly by word frequency, not by this visual code.
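The compositional code described above can be sketched in a few lines. This is a toy illustration, not the fitted model: it uses one-hot vectors as letter "tuning" (the study estimated shape-based tuning from visual search on single letters), and the position weights are invented values that simply make the edges heavier than the middle.

```python
import numpy as np

# Toy letter "tuning": one response vector per letter. One-hot vectors are
# an illustrative simplification; the paper estimated shape-based tuning
# from visual-search experiments on single letters.
letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
tuning = {ch: np.eye(26)[i] for i, ch in enumerate(letters)}

# Edge positions weighted more than the middle (values are assumptions).
weights = [1.0, 0.6, 0.5, 0.5, 0.6, 1.0]

def string_response(s):
    """Response to a string = position-weighted sum of letter responses."""
    return sum(w * tuning[ch] for w, ch in zip(weights, s))

def dissimilarity(a, b):
    """Distance between two strings in the compositional letter space."""
    return float(np.linalg.norm(string_response(a) - string_response(b)))

d_mid = dissimilarity("PENCIL", "PENICL")   # middle letters transposed
d_edge = dissimilarity("PENCIL", "EPNCIL")  # first two letters transposed
print(f"PENICL: {d_mid:.3f}  EPNCIL: {d_edge:.3f}")
```

Because the edge positions carry more weight, transposing the first two letters (EPNCIL) moves the string further from PENCIL than transposing two middle letters (PENICL) does. PENICL therefore sits closer to the real word, which is exactly the condition under which people were slower to respond "not a word".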
Sound and Meaning Contribute Less Than Thought
The eLife digest states clearly: because this purely visual model predicted human performance so well, “sound, pronunciation or meaning of the word do not contribute as much to jumbled word reading as previously believed.” The dominant signal is the visual, letter-shape-based representation—which the authors localized next with brain imaging.
Where in the Brain?
During lexical decision in the scanner, the researchers compared brain activity to (1) perceptual dissimilarity from visual search and (2) semantic dissimilarity. Lateral occipital (LO) was the region where neural dissimilarity between strings best matched the perceptual (visual search) dissimilarity—so LO is the likely substrate of the compositional letter code. By contrast, lexical decision times (how long people took to respond “word” or “nonword”) correlated with activity in the visual word form area (VWFA). So: viewing a string activates the letter-based code in LO; comparing that representation to stored words (and thus making the lexical decision) involves the VWFA.
First and Last Letters, and Reading Expertise
The “Cambridge” effect—that first and last letters matter more—fits this model: the estimated weights were asymmetric (e.g. first letter weighted more). When the same letter strings were shown inverted, transposed-letter pairs (e.g. AT vs TA) became harder to tell apart, but repeated-letter pairs (AA vs BB) did not. So familiarity with upright text increases asymmetry in how letter positions are combined, making transpositions easier to discriminate when reading normally.
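A two-letter toy example (the tuning vectors and weight values here are invented for illustration) shows why this matters: with perfectly symmetric position weights, a purely summed code cannot distinguish a transposed pair at all, while edge-heavy weights expose the transposition.

```python
import numpy as np

# Minimal two-letter sketch of the summed code (illustrative assumptions).
tuning = {"A": np.array([1.0, 0.0]), "T": np.array([0.0, 1.0])}

def response(s, weights):
    # String response = position-weighted sum of letter responses.
    return sum(w * tuning[ch] for w, ch in zip(weights, s))

def dissim(a, b, weights):
    return float(np.linalg.norm(response(a, weights) - response(b, weights)))

equal = [1.0, 1.0]    # symmetric weights, as with unfamiliar inverted text
skewed = [1.0, 0.6]   # edge-heavy weights, as with familiar upright text

print(dissim("AT", "TA", equal))   # 0.0 -> transposition invisible
print(dissim("AT", "TA", skewed))  # > 0 -> transposition detectable
```

With equal weights, the sum for AT and TA is identical, so the pair is indistinguishable; skewed weights break that symmetry. Repeated-letter pairs like AA vs BB never depend on this asymmetry, which fits the finding that inversion hurt transposed-letter pairs but not repeated-letter pairs.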
No Need for Special Letter-Combination Detectors
Some theories propose detectors for frequent letter combinations (bigrams, etc.). In this study, the single-letter compositional model fitted just as well for frequent bigrams and real words as for infrequent ones. So at the level of the representation that drives both visual search and nonword difficulty, there was no evidence that combination detectors are required; letter shape plus positional weighting was enough.
Implications for Reading and AlphaKhoj
The work shows that a single, neurally plausible visual code—compositional and based on letter shape—can explain both how we tell letter strings apart and how we decide if a string is a word. Training that visual representation (e.g. through visual search–like and letter-discrimination tasks) may therefore directly support reading. AlphaKhoj’s pattern-recognition and visual search–inspired exercises are aligned with this idea: building the same kind of letter-based visual processing that underlies efficient word recognition and jumbled-word reading.
Practical Takeaways
- Visual letter processing is central: Jumbled-word reading is largely explained by visual similarity in a letter-shape code, not by sound or meaning alone.
- Position matters via weights: First and last positions are weighted more, which fits the “first and last letter” intuition.
- Training visual discrimination: Activities that sharpen letter-shape and string discrimination (e.g. oddball search, letter confusion tasks) may strengthen the same representation used in reading.
Original Research
Source: Agrawal A, Hari KVS, Arun SP (2020). "A compositional neural code in high-level visual cortex can explain jumbled word reading." eLife 9:e54846.
Research from the Indian Institute of Science (IISc): Centre for BioSystems Science & Engineering, Department of Electrical Communication Engineering, and Centre for Neuroscience. DOI: 10.7554/eLife.54846
Read Full Article at eLife
Build the Letter-Based Visual Code with AlphaKhoj
Our app includes exercises inspired by this research: visual discrimination and pattern tasks that strengthen the same compositional letter-shape processing that supports reading and jumbled-word recognition.
Download Free App
About this summary
This article summarizes the eLife paper by Agrawal, Hari & Arun (IISc). The AlphaKhoj Research Team translates this and other neuroscience research into practical tools for reading development.