The exact origins of Misirlou are a bit murky, but it’s believed to have emerged as a kinda folk tune from the Eastern Med sometime in the early 1920s. The song’s title is Turkish and roughly translates to ‘Egyptian Girl’. The earliest known recording was made in 1927 by Theodotos Demetriades, a Greek-American musician, but most of us know it from Dick Dale’s blistering 1962 proto-punk psycho-surf-abilly version, the theme to Quentin Tarantino’s genre-redefining 1994 movie Pulp Fiction.
I read somewhere that Tarantino picks the soundtrack before writing out the screenplay. He begins with a dig around his record collection, searching for songs that fit the feel of his film idea. He places emphasis on selecting the opening and closing credit tracks early in the development process, as these musical bookends serve as anchors for the film’s rhythm.
This music-first approach is fundamental to Tarantino’s creative process, influencing the overall tone, pacing of the thing, and character development. He sometimes even creates mixtapes of tunes that his characters would listen to, using the music as a tool to flesh out their personalities. (If you’re hiring and it’s hard to choose between two candidates, equal on every level, then asking them to provide a list of their top five albums of all time can work as a tiebreaker.
On one occasion, I had a good candidate in every other respect who totally blew it by offering up a bloody Sting album on their list. No job for you, buddy.) Tarantino’s tuneage often inspires entire scenes. He’ll maybe use the anachronistic pairing of songs from different eras with historical settings, and juxtapose upbeat tunes with intense or ultra-violent scenes, like the Stuck in the Middle with You scene in Reservoir Dogs.
I’ve not dug into this, but I suspect that Tarantino was heavily influenced by Kubrick in this respect. The infamous Singin’ in the Rain sequence in A Clockwork Orange (1971) is a salient example.
Tarantino’s music-first method has not only become a signature element of his brand, but I’d wager it has influenced a generation of directors — so much of the content at the quality end of the various streaming platforms seems to pay much more attention to curating a soundtrack of ‘ready-made’ tunes. Before watching even a second of Apple TV+’s Bad Sisters I was already primed to love it, as the opening credits were accompanied by PJ Harvey’s awesome interpretation of Leonard Cohen’s Who By Fire.
Music does something to us, physiologically
It’s a universal language that transcends cultural boundaries, orchestrates neural activity far beyond mere auditory processing, taps into the deepest recesses of our minds and unlocks feelings and cognitive potentials that words alone cannot reach. But why is this the case? And does it explain why the strategic use of music in advertising might be a vastly underrated effectiveness multiplier? I was casually driving into town listening to some psychology podcast. The guest was a musicologist, and amongst other things he talked a bit about how the human mind processes language differently depending on whether the words are spoken in conversation or sung as song lyrics. My initial thought was ‘What about Lou Reed? Or Mark E Smith? Does that confuse my mental modules?’
Anyway, the example they discussed was the Queen thing, Bohemian Rhapsody. There’s a line where Freddie anguishes ‘I sometimes wish I’d never been born at all’. The music science fella explained how, in a conversation, that sentence would trigger an immediate, practical response. Our brains are wired to prioritise the survival and social dynamics of interactions. If overheard, or if someone said this to you in conversation, it would likely be interpreted as a serious expression of some kind of emotional distress, probably activating concern. This reaction stems in part from the purpose of language, which likely evolved in humans for real-time problem-solving and social negotiation. Words spoken are taken at face value to a degree, and their meaning is tied to the immediate context. When heard as part of a song, however, the same phrase will be processed in a much more layered and abstract way. Songs operate on a different psychological and emotional level, allowing them to be interpreted metaphorically or artistically rather than purely literally.
(As I was stroking my chin at this insight another bizarre thing happened. Siri, listening in, interrupted the podcast and asked me if I was contemplating suicide and offered to call up a helpline… surveillance FTW, but that’s a thought for another article.)
Talking is more practical
Its main purpose is to transmit information quickly and efficiently. It evolved to solve critical problems like warning about predators, coordinating hunting efforts, or navigating social hierarchies. It’s no coincidence that conversation thrives on brevity and clarity — the very attributes that kept our ancestors alive in moments of danger or opportunity.
The mind processes language dynamically, decoding sound patterns into meaning almost instantaneously. Syntax, context, and intent are evaluated on the fly to ensure the message lands, and that the response is equally sharp and relevant. But where conversational speech aims to exchange information, songs aim to exchange feelings. Their repetitive structures, rhymes, and melodies aren’t just aesthetic choices, they’re mnemonic devices — anchoring information to something easier to remember, which aids its retention and reconstruction.
Evolutionarily, song may even have come first, with spoken language a by-product. There’s compelling evidence that early humans used rhythmic and melodic vocalisations long before they developed structured speech. These proto-songs likely helped our ancestors bond and coordinate. And attract mates, obviously (plus ça change, plus c’est pareil…). Rhythm and melody weren’t about exchanging information but about fostering connection and cohesion — a critical advantage in a world where survival depended on the strength of the group.
Among the earliest examples of song is so-called motherese, the ancient sing-song way in which mothers communicate with babies, which seems to be universal across cultures. Early humans likely used song-like communication to soothe, signal, and bond long before words evolved to express specific ideas. In this sense, music was humanity’s first language, one that prioritised emotional resonance over factual precision.
As we’ve said earlier, the repetitive structures in songs are not just aesthetic but serve as powerful memory aids
Evolutionarily, repetition ensured that critical information — like oral histories or communal rituals — was passed down reliably. (Jingles use this same principle, embedding brand messages through repeated exposure in a catchy format.) And the rhythmic and melodic elements of songs tap into the brain’s natural affinity for patterns.
We’re lost in music, caught in a trap. No turning back.
If music was humanity’s first language, perhaps it’s time we advertisers reminded ourselves to give the tune its proper status and significance. As the renowned philosopher Madonna Louise Ciccone correctly put it, ‘Music makes the people come together’, and it’s a combined multi-sensory experience that drills right into the brain’s dopamine reward system.
When done right, music doesn’t just accompany a message — it becomes inseparable from it, amplifying its impact and wrapping it tight in the audience’s memory. Science is real.
And in the battle for attention and memory, the right selection of tune or the branded jingle — sneaking in through the back door of your brain even when you’re not really listening — might still be one of the sharpest tools we’ve got.
Featured image: Sister Sledge, Lost In Music (1979)