July 6, 2020

Meta Model Revisited: The Real Structure of Magic



Back in April 2019, I wrote a post on the Meta Model. Although much of the information is still valid, I had made an error regarding the linguistic theory behind the Meta Model. 

In this post, I’m attempting to rectify that mistake. However, I’m not the only one who made this error. 

When I first wrote about the Meta Model, there was a lot of misinformation, and many people were repeating the same outdated ideas. I pulled together what I thought was the best of these ideas and wrote what I believed was a detailed explanation of the Meta Model. 

It is my personal opinion that a person is allowed to be wrong as long as they learn from their mistakes. And that’s exactly what I did. My understanding of the Meta Model was vastly expanded thanks to a certain gentleman named Eric Robbie. 

Eric Robbie has been hailed as the “father of British NLP” and has written much on the subject of NLP. 

It was thanks to him and his research that I was able to gain a deeper understanding of what the Meta Model is about and the ordering principle behind it. 

His work on the Ordering Principle of the Meta Model will be cited throughout this post as a debt of gratitude. 

With that being said, let’s dive in. 

Note: All diagrams that appear in this post are credited to Eric Robbie unless otherwise stated.

History of the Meta Model

The Meta Model made its original appearance in The Structure of Magic Vol. I, written by Richard Bandler and John Grinder, the original co-founders of NLP. Five years after it was published, Steve Lankton created the first attempted ordering of the Meta Model, featured in the appendix of his book Practical Magic.

Before Lankton, Robert Dilts created an indirect ordering of the Meta Model. In his rendition, he sorted the Meta Model into 3 common-functional groups:

Eric Robbie created his first ordering of the Meta Model based on the orderings of Lankton, Dilts, and Cameron-Bandler (the last of which appeared in the appendix of Happily Ever After, 1978):

Eric Robbie’s ordering caught on and was passed down to several generations of NLP Practitioners. By 1986, it was widely circulated throughout Europe but hadn’t made its way to the United States. About 2 years later, Eric slightly modified his original ordering by moving the Lost Performative to the other side of the “operates on” line. 

He also began to see presuppositions as a separate class of operator and moved it south to show that it operated on the model as a whole. 

In a conversation between Greg Gibson, who was writing the review for the Structure of Magic I, and Richard Bandler, Richard stated that the Meta Model is presented in the inverse order of how one should use it. 

Shortly afterward, NLP Trainer Tad James turned his list upside down so that items such as Cause and Effect (CE) and Complex Equivalence (CEq) were at the top of the page, and deletion items were at the bottom. However, he did not offer any rationale for this ordering.

Eric Robbie released the official ordering of the Meta Model in 1987, when he presented it for the first time in the United States. Since then, it has become the standard in the field of NLP.

X-Bar Theory Crash Course

In my previous post on the Meta Model, I mentioned how John Grinder and Richard Bandler make a distinction between deep structure and surface structure. These 2 terms originate from Transformational Grammar, which was initially formulated by Noam Chomsky back in 1965. 

Transformational Grammar is a system of language analysis that recognizes the relations among various elements of a sentence and among the possible sentences of a language and uses processes or rules (some of which are called transformations) to express these relationships. (Source: Britannica)

Surface structure is the sentence in the form that it is heard or written. 

Deep structure is an abstract representation that identifies ways that a sentence can be analyzed and interpreted. 

Here’s a quick example to see how that looks: 

Surface Structure: I know a man who flies planes.

Deep Structure: I know a man. The man flies planes. 

It didn’t take long for problems to emerge with this theory. And they were already well-known when the Structure of Magic I came out. 

Here’s the most obvious problem, linguistically speaking: How do you make the nominalized version of a phrase or sentence without breaking all of the “Deep Structure” rules?

To illustrate this, here’s an example taken from Horrocks’s “Bounding Theory and Greek Syntax”:

Sentence 1: Bresnan criticized Chomsky

Sentence 2: Bresnan’s criticism of Chomsky. 

We can see that both of these sentences have the same deep structure. Unfortunately, it is nearly impossible to write out rules that track each step from one sentence to the other.

Here are a few questions that need to be answered: 

  1. How do you get from Bresnan to Bresnan’s, with an apostrophe?

  2. How do you get from criticized, the verb, to criticism, the noun, singular, when it’s possible that Bresnan could’ve had 2 or more criticisms of Chomsky?

  3. Let’s say you manage to go from “criticized” to “criticism”. You still have to put an “of” in front of it. Where does that “of” come from? You can’t just make it up or pull it out of the sky. 

To make matters worse, there are at least 6 or more ways to make a nominalization out of a verb in the English language.

Which is the right one for any given verb?

  • Do we add an “-ance” as in “performance”?

  • Do we add an “-ment” as in “bewilderment”?

  • Do we add an “-ism” as in “criticism”?

Then there’s the preposition. 

If you have to add one, how do you know which one to use?

  • Do you add a “to” as in “marriage to”?

  • Do you add a “for”, as in “proposal for”?

  • Do you add an “of” as in “criticism of”?

Even Chomsky later admitted that it’s hard to draw a general rule when there is no regular pattern for forming the nominalized word. 

 In short, there are 2 major problems with the Standard Theory: 

  1. There were many instances that didn’t follow transformational rules. 

  2. There were instances where Deep Structure couldn’t completely give rise to the meaning of the sentence, the way it should. 

So, what’s the solution? 

Instead of thinking of the individual words in sentences, it made more sense to group the words into phrases. Those phrases acted as “language modules” which could be plugged in and out of sentences. This helped to pave the way for various solutions.

In Steven Pinker’s book “The Language Instinct”, he talks about this at length. 

Modern linguistics has shown there to be a common anatomy in all the world’s languages.

First, there’s the noun phrase, or NP for short. It’s named after the noun that appears in the phrase. 

As you would expect, the NP owes most of its properties to that one noun. 

Here’s an example of a noun phrase (credits to Steven Pinker): The cat in the hat

In this example, “the cat in the hat” refers to a kind of cat, not a kind of hat. The cat is the core meaning of the whole phrase. 

This special noun is also referred to as the “head” of the phrase. 

Verb Phrases, or VP for short, follow a similar pattern. 

Here’s an example (credits to Steven Pinker): Flying to Canada before the police catch them. 

In this phrase, we are talking about the “flying” as opposed to the “catching”. 

This leads us to the first principle which is that the entire phrase is about what its “head” word is about. 

The second principle states that sets of players interact with each other in particular ways, each with a specific role. 

Here’s an example sentence to illustrate this: 

Sergey gave the documents to the spy. 

In this example, we are not talking about any act of giving. There are 3 entities to account for: Sergey (the giver), the documents (the gift), and a spy (the receiver).

Role-players are formally referred to as arguments, in a logical sense. 

Noun phrases can assign roles to one or more role-players. 

The head and the role-players are joined together in a sub-phrase, which is smaller than an NP or VP. 

The standard terms are N-bar and V-bar, named after the way they are written.

Here’s an example of an N-bar:

Note: PP stands for Prepositional Phrase. 

The third ingredient of a phrase is one or more modifiers. 

Take the phrase “man from Illinois” as opposed to “governor of California”. In the latter phrase, in order to be a governor, you have to govern something. This is where the “California” part comes into play. “From Illinois” is just an extra piece of information to help identify which man we’re talking about. 

The distinction between role-players and modifiers dictates how the phrase-structure tree looks. To elaborate, the role-player stays next to the head noun inside the N-bar, while the modifier goes on a separate branch but stays within the NP “house”.

Here’s what that looks like:

What’s true for n-bars and noun phrases is also true for v-bars and verb phrases. 

Here’s an example (credits to Steven Pinker): 

Sergey gave those documents to the spy in the hotel. 

“To the spy” is one of the role-players of the verb give. After all, there’s no such thing as giving without a getter. 

“To the spy” lives with the head verb inside of the v-bar. 

“In a hotel” is a modifier that is kept outside of the v-bar, but is still in the VP. 

We can say “gave the documents to the spy in a hotel” but not “gave in a hotel the documents to the spy.” 

The fourth component of a phrase is the subject or, as linguists call it, the SPEC. The word SPEC is short for specifier. 

The subject is a type of role-player that usually acts as a causal agent (if one exists). 

Here’s an example (credits to Steven Pinker):

“The guitarists destroy the hotel room”

“The guitarists” is the subject, or causal agent, of the destruction of the hotel room. 

Noun phrases can also have subjects. 

For example, “The guitarists’ destruction of the hotel room”

When we use this structure, the nominalization problem is solved because there’s no need for a transformation. “Tracking each step” is also covered by the common anatomy; in other words, there’s no change whatsoever. In addition, the choice of ending (e.g. -ance) or of the appropriate linking word (e.g. by) has been relocated to the lexicon. 
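To make the lexicon point concrete, here’s a minimal Python sketch. The entries and the `nominalize` helper are invented purely for illustration (a real lexicon is far richer): the idea is that each verb’s entry simply memorizes its nominalized form and linking preposition, so no transformational rule is needed to derive one from the other.

```python
# Each verb's lexical entry stores its nominalized form and the
# preposition that links it to its role-player. The pairing is
# memorized, not derived by rule.
lexicon = {
    "criticize": {"nominal": "criticism",   "prep": "of"},
    "propose":   {"nominal": "proposal",    "prep": "for"},
    "marry":     {"nominal": "marriage",    "prep": "to"},
    "perform":   {"nominal": "performance", "prep": "of"},
}

def nominalize(verb, role_player):
    # Look up the stored form; no transformation happens here.
    entry = lexicon[verb]
    return f"{entry['nominal']} {entry['prep']} {role_player}"

print(nominalize("criticize", "Chomsky"))  # criticism of Chomsky
print(nominalize("marry", "Bob"))          # marriage to Bob
```

Notice that “criticism of” and “marriage to” come out correctly not because a rule computed them, but because the lexicon already stored them.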

As a quick re-cap, here are the similarities between noun phrases and verb phrases: 

  • A “head word”, which gives the phrase its name and determines what it’s about. 

  • Some role-players, which are grouped with the head inside a head phrase

  • Modifiers, which appear outside of the N-bar or V-bar. 

  • A subject, also known as a SPEC

  • The same ordering i.e. the noun (or verb) comes before its role-players

The same rules apply to Prepositional Phrases (PP) and Adjectival Phrases (AP). 

An example of a PP would be “in the hotel” where “in” is the head word. 

An example of an AP would be “afraid of the wolf” where “afraid” is the head word. 
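The four ingredients in the recap above can be sketched as a small data structure. This is my own hedged illustration in Python (the `Phrase` class and `linearize` helper are invented for this post, not standard parser code):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Phrase:
    head: str                                              # what the phrase is "about"
    spec: Optional[str] = None                             # subject / SPEC
    role_players: List[str] = field(default_factory=list)  # inside the N-bar/V-bar
    modifiers: List[str] = field(default_factory=list)     # outside the bar, inside the phrase

def linearize(p: Phrase) -> str:
    # The ordering principle: head before its role-players, modifiers last.
    parts = [p.spec, p.head, *p.role_players, *p.modifiers]
    return " ".join(x for x in parts if x)

# "Sergey gave those documents to the spy in the hotel"
vp = Phrase(
    head="gave",
    spec="Sergey",
    role_players=["those documents", "to the spy"],  # live in the V-bar
    modifiers=["in the hotel"],                      # in the VP, outside the V-bar
)
print(linearize(vp))  # Sergey gave those documents to the spy in the hotel
```

The point of keeping `role_players` and `modifiers` in separate fields is exactly the tree distinction described earlier: role-players live with the head inside the bar, modifiers hang off a separate branch.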

The foundations of X-bar theory were laid by Chomsky back in 1970 and added to by others over the next couple of years. 

The bulk of the theory was already in existence by the time The Structure of Magic I came out. 

The Meta Model Decoded

It’s time to get into the good stuff. Let’s talk about how the Meta Model relates to the X-bar theory we just described above. 

To do that, we’ll be breaking down the Meta Model into 4 discrete sections and talk about them one by one.

Deletion Patterns

As a quick note, the first group of Meta Model patterns do not all use the same mechanisms.

Deletion can happen in one of two places: somewhere in the journey from having a complete experience to forming a complete sentence, or somewhere in the journey from forming a complete sentence to offering spoken words. 

Here’s a diagram, courtesy of Eric Robbie, which helps illustrates this: 

It should also be noted that The Structure of Magic I does not classify Lack of Referential Index and Incompletely Specified Verbs as examples of deletion. Instead, they were given as examples of generalization. Later writers classified them as deletion phenomena. 

Each of these patterns can be analyzed with respect to the 4 roles of the x-bar form. 

There are also about 4 different meanings for the term “operates on”. The goal of this post is to coalesce them into one.


Unspecified Verb 

Here’s an example sentence: John gave the book to Mary. 

According to X-bar theory, the verb becomes the dominant force in the sentence. It also determines the kind of role-players, and how many, it expects to be surrounded by. 

In this example, our verb is “gave”. This implies a giver (John), the thing being given (the book), and a receiver (Mary). The choice of verb is crucial to whatever’s going on. 

In terms of ordering, both structurally and lexically, the choice of the verb “operates on” all of the other Meta Model patterns in the group. 

When we change the verb by either refining its range, or improving its accuracy, we also affect the SPEC and all the role-players as well.

Unspecified Referential Index - in the specifier of VP

After the verb, the next important thing is the subject of the sentence, also known as the specifier or SPEC for short. 

Even when the subject is present, it is usually referred to, but not pointed at or pointed out. 

In practical terms, a sentence like “John gave the book to Mary” becomes “He gave the book to Mary.” This results in a sentence which does not have a meaning based in the real world. 

According to the Theta Criterion, there should be one and only one occupant in each key role.

The Theta Criterion is a constraint within X-bar theory. It was originally proposed by Noam Chomsky and is used to determine the specific match between arguments and theta roles (θ-roles) in logical form. 

Logical Form (LF) is a level of representation that sits next to Deep Structure and Surface Structure. It was originally created when linguists were trying to solve the problem of quantification. 

Depending on which verb we choose, the range of who or what the subject can be is fixed. In short, the main verb operates on Lack of Referential Index. 

Eric Robbie recommends renaming Unspecified Referential Index to Lack of Referential Index, and Unspecified Verb to Incompletely Specified Verb, because “specifier” and “specified” now have new, different usages. 

Unspecified Referential Index - in any of the role-players

If a role-player has not been omitted entirely, it is usually replaced with either a pronoun or a pronominal. 

Here’s an example: Jack gave it to Mary

The argument for the previous URI pattern applies to this one as well. To gain clarity, you would simply ask something along the lines of “Gave what?” or “To whom?”

In terms of ordering, Unspecified Verb operates on Unspecified Referential Index.

Simple Deletion

The argument for Unspecified Verb and Unspecified Referential Index also applies here. As I’ve said before, the verb determines which noun phrases can be picked as role-players. 

When a deletion occurs in therapy, it usually involves what’s called a “state-change” verb or state-change adjective. The stress is generally placed on the person’s emotional reaction, and all the detail after that is usually omitted. 

This typically takes the form of “I’m hurt...” or “I’m angry…”. 

According to Eric Robbie, since these are really the verb “to be” plus a state-description adjective, the role-players involved are those which each adjective requires. Going back to the previous example, if the person says “I’m hurt”, then this should be followed up with a Noun Phrase (e.g. “by your action”) or an S (e.g. “at what you did”). 

Note: S stands for Sentence. 

In terms of ordering, the main verb operates on any role-player. 

Comparative Deletion

When the Meta Model was first introduced, Comparative Deletion was thought of as part of a larger sentence: specifically, a part of the sentence that was neither visible nor audible upon first hearing or first sight. 

Here’s an example sentence: Diana is much better now | than she was yesterday

In this instance, the speaker is judging Diana against some basis of comparison. To challenge the utterance, you would explicitly ask for that basis of comparison. In linguistic terms, this is known as the “comparative”. 

Since we’re using X-bar theory, we no longer need to do that and can just focus on the words used in the x-bar phrase. We can even lump together absolute adjectives (good, fine), comparative adjectives (better, happier), and superlative adjectives (best, ultimate) and call them members of the modifier class. 

A distinct advantage behind doing this is that you don’t have to wait for a person to use a comparative to ask them what their standards are. 

Here’s another example: 

Person A: This is a good idea.

Person B: Good? Compared to what?

Modifiers are selected based on the X-bar item which they modify; in other words, on whatever NP (acting as a SPEC or role-player) they’re attached to. 

Both Unspecified Referential Index and Simple Deletion operate on any modifier that would be employed. If a role-player isn’t present, you can’t insert a modifier on either side of it. 


Nominalization

Nominalization is a “catch-all” for any and all patterns we’ve discussed so far. 

Here’s an example sentence: We have to improve our communication. 

When we break this sentence down, the question to ask is “Who is communicating what to whom?”

In the original sentence, there is an entire second sentence turned into a single word, and then “frozen” and tucked away inside of the first one. 

This means that the nominalization already contains a verb. We reveal the verb by taking the original head noun (i.e. communication) and turning it into a verb (i.e. communicating).

We also have a choice of which verb to respond to: 

Choice 1: “have to” which operates on improve

Choice 2: “improve” which operates on communicating

Choice 3: communicating itself

Most accounts of the Meta Model only suggest the last choice.

Within the inner sentence, the nominalization operates on everything else. In the outer sentence, the head verb “improve” operates on “communication”. 

Generalization Patterns

As the name implies, these Meta Model patterns deal with generalizing, or forming broad conclusions based on limited data. 

Universal Quantifier

Sentences that begin with all/always, every/none, and every time/never usually show up in relation to people and things, or in relation to actions (i.e. how often a verb is occurring).

Here’s an example sentence in relation to people: All my friends lie to me

If we were to write this in standardized logic, this would have a “for all x” form which looks like the following: 

For all x: if x is my friend, x lies to me

In this instance, x is a variable and could take on any value from a range. 

“For all x” represents the quantifier and “if x is my friend, x lies to me” is the proposition which gets quantified.
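In code terms, the quantifier and the proposition are separate pieces: the proposition is a predicate, and the quantifier runs it over a domain. Here’s a toy Python sketch (the people, the friendship set, and the `lies_to_me` set are all invented for illustration):

```python
# A made-up domain of people.
people = ["Alice", "Bob", "Carol", "Dave"]
friends = {"Alice", "Bob", "Carol"}
liars = {"Alice", "Bob"}  # the people who lie to me

def is_friend(x):
    return x in friends

def lies_to_me(x):
    return x in liars

# "For all x: if x is my friend, x lies to me"
# all() plays the role of the universal quantifier;
# the generator expression is the proposition being quantified.
claim = all(lies_to_me(x) for x in people if is_friend(x))
print(claim)  # False: Carol is a friend who doesn't lie to me
```

This also shows why the Universal Quantifier is easy to challenge: a single counterexample (here, Carol) makes the whole claim false.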

It’s important to note that the quantifier is at the “front” of the sentence in both the “natural language” version and the “professional logician” version. The word “all” affects the whole sentence. 

When the Universal Quantifier is added to a verb, it tells you about actions and processes, and focuses on time instead of quantity. 

Here’s an example sentence (credits to Steven Pinker): Glasgow Rangers always win the League cup comfortably.

In this example, the word “always” modifies the whole sentence. 

According to Eric Robbie, the “quantifying effect” of the word “always” wraps itself around the VP, and then around the whole sentence. 

Here’s how a professional logician would phrase this sentence: 

For all x: if x is (the team called) “Glasgow Rangers”, x wins the League cup

In this instance, “for all x” not only means for every instance, but for every instance through time, or for all times. 

The logical form for actions and processes is the exact same as the logical form for people and things. 

The above analysis also applies to what used to be called “Generalized Referential Index”. The “referential index” refers to the subject of the sentence. 

GRI can be treated as if there were an implied all in front of it. 

For example, the sentence “people are weird” can be read as “All people are weird”.

Modal Operators 

There are 2 main types of Modal Operators: possibility (can and can’t) and necessity (must and mustn’t).

Here’s an example: The Red Sox can’t win the World Series this year

In this example, “can’t” starts off by operating on the VP whose head word is “win”, and winds up controlling the whole sentence. 

We can also show the global influence of “can’t” by formally moving it to the front of the sentence and having it “work on” the entire proposition. 

The logical form of the above sentence is “not-possible p” where p stands for any proposition. 

Here’s how it would read according to professional logic: 

(It is) not possible (that): the Red Sox will win the World Series this year. 

This also happens when the key word is must (or mustn’t) rather than can’t.

You would still move the word to the front of the sentence, and the logical form of the sentence would be “necessary p” where p stands for any proposition. 

Here’s an example sentence: Glasgow Rangers must win the Scottish League cup

If we were to write this sentence in standardized logic, here’s how that would look like: 

(It is) necessary (that): Glasgow Rangers win the Scottish League cup
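The “not-possible p” and “necessary p” forms can be sketched with a toy possible-worlds model, a standard way of giving modal operators a semantics. The worlds and propositions below are invented for illustration:

```python
# Each "world" is the set of propositions true in that world.
worlds = [
    {"rangers_win"},           # world 1
    {"rangers_win", "rain"},   # world 2
    {"rain"},                  # world 3
]

def possible(p):
    # "possible p": p holds in at least one world
    return any(p in w for w in worlds)

def necessary(p):
    # "necessary p": p holds in every world
    return all(p in w for w in worlds)

print(possible("rangers_win"))     # True  (holds in worlds 1 and 2)
print(necessary("rangers_win"))    # False (fails in world 3)
print(not possible("redsox_win"))  # True: the "not-possible p" form
```

Just as with the Universal Quantifier, the modal word sits outside the proposition and operates on it as a whole, which is why moving it to the front of the sentence makes the logical form explicit.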

In conclusion, the “Generalization” patterns operate on the “Deletion” patterns. 

Distortion Patterns

Distortion patterns deal with the relation between 2 sentences, also known as complex sentences, where one sentence is embedded inside the other. The outside sentence comments or reports on the “inside” one. 

Lost Performative

Lost performative occurs when someone makes a rule or judgment without taking responsibility for it. 

Here’s an example sentence: It’s bad to be inconsistent. 

We could put it another way by saying “I say (that) it’s bad to be inconsistent.”

In this example, the outer sentence is “I say (that) it’s bad to be inconsistent”

The inner sentence is “it’s bad to be inconsistent”

Note: A Verb Phrase can be generally regarded as a complete sentence. 

The headword in this sentence is “say”.

Mind Reading

Let’s take the example sentence “You don’t like me”.

You could put it another way by saying “I know (that) you don’t like me”

For this sentence, the head word is know.

Here’s an x-bar diagram to see how that looks:

Complex Equivalence

In this pattern, the head word is “means”. The subject (or SPEC) is the external evidence and what it means is an internal conclusion. 

Here’s an example sentence: “You don’t bring me flowers means that you don’t love me anymore.”

Cause & Effect

Here’s an example sentence: “You make me feel bad”

In this case, “you” are the cause and “me feeling bad” is the effect. 

The head word for the outer sentence is “make”. There’s also an embedded sentence, or VP, built around the second verb “feel”. 

Here’s a diagram to illustrate this:

The Distortion patterns of the Meta Model all involve a shift in focus from the verb or head word inside the embedded sentence to the verb or head word of the “outside” sentence. They all involve some kind of knowledge predicate where you’re either knowing or saying, or else asserting that some kind of meaning or causing is going on. 

When using the words “makes” or “causes”, an opinion is being formed about what’s going on between the speaker and the world. 

For each Distortion pattern, the head verb (aka the knowledge predicate) operates on the embedded sentence on the right. 

In short, the Distortion Patterns are forms of projection where the person is adding something to their map or model which isn’t there in reality. 

Eric Robbie recommends that we refer to the Distortion Patterns as “Addition” instead.


Presuppositions

We’ve saved the best one for last. And in some ways, it’s the most powerful of them all. 

Presuppositions operate on the rest of the Meta Model by virtue of entailment and logical form. 

Entailment occurs when one is able to draw necessary conclusions from a particular use of a word, phrase, or sentence. 

Here’s an example sentence: “If my husband knew how much I suffer, he wouldn’t do that.”

Given the above statement, we could ask standard Meta Model questions as follows: 

  • How are you suffering? (Unspecified Verb)

  • What is he doing? (Simple Deletion)

  • How do you know he doesn’t know? (Mind Reading)

  • How does his doing “that” cause you to suffer? (Cause & Effect) 

When we approach it this way, we tend to get typical responses. 

A better way would be to ask yourself “What must be there?” and you arrive at what’s already there in her model. 

Using the previous example, she could think that: 

  • She suffers. (Poorly Selected Verb)

  • He’s doing something. (Role-Player Deletion)

  • He doesn’t know/She knows he doesn’t know. (Mind Reading)

  • His doing that is causing her to suffer. (Cause & Effect)

The process of getting from what the woman said to formulating the above 4 statements is one that’s taken place in your mind. 

This process should match what must have gone on in her own mind.

The act of reversing a presupposition (or multiple ones) is one of inference. 

You deduce the 4 statements given above, because she presupposes them, and then you respond to them.


