June 9, 2022

NLP Representational Systems


Introduction

When I was in school, I learned about the 5 basic senses. 

Those senses are sight, hearing, touch, smell, and taste. 

NLP gives them a slightly different name: Representational Systems, or VAK for short.

According to the NLP model, all the distinctions humans can make about their environment, both internal and external, can be usefully represented in terms of these systems. 

In fact, all skills function by the development and sequencing of these representational systems.

NLP Representational Systems Overview

The 5 NLP Representational Systems are Visual (sight), Auditory (hearing), Kinesthetic (bodily sensations), Olfactory (smell), and Gustatory (taste). 

Of those 5, Visual, Auditory, and Kinesthetic are considered the primary ones.

Other representational systems include Auditory-Digital (self-talk), Auditory-tonal (sounds and music), Kinesthetic-visceral (gut sensations), Kinesthetic-tactile (touch), and Kinesthetic-meta (emotions). 

Each representational system has a 3-part network: Input, Representation/Processing, and Output. 

The first stage involves gathering information from the environment. 

The second stage involves mapping the environment and establishing behavioral strategies such as learning, thinking, deciding, etc. 

The final stage, or output, is the “causal transform of the representational mapping process”.

All output is behavior, and behavior is activity within any representational system at any of these stages. In other words, seeing, feeling, and hearing are all forms of behavior.

Representations by themselves are meaningless. We can only determine the significance of a representational system by how it functions within the context of a strategy in a person’s behavior.

A representation can serve as a limitation or a resource, depending on how it’s being used. 

For example, if you were to take an artist and a schizophrenic and have both of them engage in visualization, the visualization would be far more productive for the artist than it would be for the schizophrenic. 

General Characteristics Based on Primary Rep System

As human beings grow from infancy to adulthood, we tend to prefer one representational system over the others.

Your primary representational system plays a significant role in your personality type. Studies within the NLP field have also reported a direct correlation between an individual’s primary representational system and certain physiological and psychological characteristics. 

Here’s a brief overview of each one: 

Visual People

Visual people tend to stand or sit with their bodies erect and their eyes looking upward. Their breathing tends to be shallow and high in the chest. They are easily distracted by noise. They learn and memorize by seeing pictures. They make up about 60 percent of the population. 

Auditory People

Auditory people tend to move their eyes from side to side. They have regular and rhythmic breathing in the middle of their chest. They tend to be very good with words and learn best by listening. They also tend to lean forward while talking. They make up 20 percent of the population. 

Kinesthetic People

Kinesthetic people often use words that indicate motion, sensation, and/or action. They breathe deep into the stomach. They tend to move slowly. They enjoy closeness with other people. They feel deeply and love deeply. They make up about 20 percent of the population. 

Auditory-Digital People

These people operate at a meta-level of awareness above the sensory level of VAK. They tend to come across like a “computer”. They speak in a monotone voice and have thin, tight lips. 

Building Rapport With Others

One of the best ways to get into rapport with someone is to use words that match their primary representational system. 

You can get an idea of someone’s primary representational system by listening to the kinds of predicates they use. 

Here are some examples for each representational system:  

Visual: appear, glow, graphic, sparkle, vivid, reflect, colorful, cloudy

Auditory: harmonize, explain, echo, inquire, complain, discuss, talk, request

Kinesthetic: grapple, exciting, smooth, run, comfortable, warm, work

Olfactory/Gustatory: bitter, spicy, sweet, stale, savor, odor, fresh

Here are some predicate phrases you can listen out for as well: 

Visual: paint a picture, eye to eye, bird’s eye view, beyond a shadow of a doubt, see to it

Auditory: loud and clear, unheard of, rings a bell, hold your tongue, manner of speaking

Kinesthetic: hand in hand, tap into, turn around, pain-in-the-neck, pull some strings
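As an illustration only, the predicate lists above could be dropped into a toy word-counting sketch that guesses which system dominates a piece of speech. The function name, the scoring rule, and the few extra words such as "see" and "feel" are my own assumptions, not something from the NLP literature:

```python
# Toy sketch: guess a speaker's preferred representational system by
# counting sensory predicates in their speech. The word lists are the
# examples from this article plus a couple of obvious additions; a real
# attempt would need far larger lists and phrase matching.
PREDICATES = {
    "visual": {"appear", "glow", "graphic", "sparkle", "vivid", "reflect",
               "colorful", "cloudy", "see", "picture"},
    "auditory": {"harmonize", "explain", "echo", "inquire", "complain",
                 "discuss", "talk", "request", "sounds", "hear"},
    "kinesthetic": {"grapple", "exciting", "smooth", "run", "comfortable",
                    "warm", "work", "grasp", "feel"},
}

def guess_rep_system(utterance: str) -> str:
    # Strip basic punctuation and lowercase before matching words.
    words = utterance.lower().replace(",", " ").replace(".", " ").split()
    counts = {system: sum(w in vocab for w in words)
              for system, vocab in PREDICATES.items()}
    # Return the system with the most predicate hits.
    return max(counts, key=counts.get)

print(guess_rep_system("That idea sounds good, let's discuss it and talk it over"))
# → auditory
```

A higher count of matching predicates simply suggests which system the person is favoring at that moment; it's a rough cue, not a diagnosis.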

You can also notice which representational system a person is using by watching their eye movements. 

In NLP, these are known as eye-accessing cues; they were originally identified by Richard Bandler and John Grinder during their in-person trainings. 

They noticed that whenever they asked questions, their students’ eyes tended to look the same way before answering. 

Eye movements do not create internal experience, they simply reflect the internal neurological processing that’s already occurring. On the flip side, you can voluntarily control your eye movements to stimulate the corresponding representational systems.

Here’s the typical layout of eye-accessing cues for a normally organized person, shown from your perspective as you look directly at them:

Eyes up and to your left (their right): Visual constructed (Vc)
Eyes up and to your right (their left): Visual remembered (Vr)
Eyes level and to your left: Auditory constructed (Ac)
Eyes level and to your right: Auditory remembered (Ar)
Eyes down and to your left: Kinesthetic (K)
Eyes down and to your right: Auditory-digital (Ad)

People who are left-handed or cerebrally-reversed will move their eyes in the opposite direction.

As a quick tip, you can often tell whether someone is normally organized by noticing which wrist they wear their watch on. 

If they wear it on their left wrist, they’re normally organized. 

If it’s on the right wrist, they’re reverse organized. 

This is because time is an auditory-digital sense, and people tend to wear their watch on the side where they access auditory-digital. 

NLP Submodalities 

The brain represents all experience using modalities and the qualities or properties of those modalities. 

These qualities or properties are known as submodalities. 

Submodalities allow us to speak with greater precision about the content of our thoughts. 

We can make finer distinctions in our internal representations and these distinctions are what create the messages and commands for how to feel and respond. 

This is why it’s a common NLP saying that “submodalities drive behavior”. 

Submodalities can be broken down into 2 types: digital and analog submodalities. 

Digital submodalities are either on or off; there’s no in-between. For example, whether an image is in color or black-and-white is a digital submodality. Whether it’s a still picture or a movie is another. 

Analog submodalities exist along a continuum. Examples include loudness, brightness, blurriness, and contrast.

When certain submodalities get altered, they may alter other submodalities as well, like a chain reaction. We call these driver submodalities. 

Going Meta

In order to detect the specific qualities and attributes of a particular modality, you have to go “meta” or above the internal representation.

Human beings have 2 levels of thought. The first level is known as the primary state. The primary state is our everyday state of consciousness, where we experience thoughts and feelings about the outside world.

The second level is known as meta-states. This is where we have thoughts about our thoughts, feelings about our feelings, and states about states. 

At this second level, our thoughts and emotions relate to the world “inside” ourselves.

Our beliefs exist on this level as well, which is why it’s often insufficient to change beliefs by shifting submodalities, which operate at a lower level.

In order to believe in something, you have to say “yes” to the representation. 

In order to disbelieve in something, you have to say “no” to the representation. 

In order to turn a thought into a belief, or a belief back into a mere thought, you have to move to a meta-level and either confirm or disconfirm the thought. That means shifting the submodalities that affect saying “yes” or “no” to the representation.

Think about an experience, then think about how you THINK about the experience. 

Is it bright or dark?

Is it a picture or a movie?

Is it blurry or in focus?

By asking these kinds of questions, you’re able to put yourself into a higher frame of mind where you can start altering the qualities of your internal representations.


Tags

neuro-linguistic programming, NLP, nlp representational systems


