
Semantic Memory

(1) Network Model (Quillian 1968)

Quillian was an artificial intelligence researcher who wanted to build a repository for moon rock data. His problem was how to encode a large amount of related information in a small amount of space. He proposed a network model that may have relevance to human memory: the Teachable Language Comprehender (TLC).

TLC Structure

Designed to learn from texts and answer questions

Memory consists of a hierarchy of interconnected concepts, called nodes (squares).

Nodes connected to higher levels by “isa” links (arrows)

Property links store information

TLC Properties

Information is stored at the highest possible level in the hierarchy.

Nodes inherit the properties stored at higher levels.

Allows for cognitive economy: information not directly stored can be inferred

Cognitive economy example

Do aardvarks have livers?

You can answer this question correctly even though you’ve never been taught this or thought about it before. It can be inferred because animals have livers, and aardvarks are animals.
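As a concrete illustration, here is a minimal sketch of a TLC-style network in Python. The node names and properties are illustrative assumptions, not Quillian's actual network; the point is that "has a liver" is stored once, at the "animal" node, and inherited by everything below it.

```python
# A toy TLC-style hierarchy: "isa" links point to the node one level up, and
# property links store information at the highest level possible (cognitive
# economy). The concepts and properties here are illustrative assumptions.

NETWORK = {
    "animal":   {"isa": None,     "properties": {"has a liver", "eats food", "breathes"}},
    "bird":     {"isa": "animal", "properties": {"has wings", "lays eggs"}},
    "fish":     {"isa": "animal", "properties": {"has gills", "can swim"}},
    "canary":   {"isa": "bird",   "properties": {"is yellow", "can sing"}},
    "aardvark": {"isa": "animal", "properties": {"eats ants"}},
}

def has_property(concept: str, prop: str) -> bool:
    """If a property is not stored at the node itself, walk up the 'isa'
    links and inherit it from a higher level (cognitive economy)."""
    node = concept
    while node is not None:
        if prop in NETWORK[node]["properties"]:
            return True
        node = NETWORK[node]["isa"]
    return False

# "Do aardvarks have livers?" -- never stored with "aardvark",
# but inferred because aardvarks are animals and animals have livers.
print(has_property("aardvark", "has a liver"))  # True
```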

Testing TLC (Collins and Quillian 1969)

Used a speeded verification task:

Subjects answered questions posed in the forms "Is a ________ a _______?" or "Does a ________ have _______?"

Is a watermelon a fruit?

Does a typhoon have skin?

Dependent measures: reaction time and accuracy.

TLC Predictions

Reaction time will be affected by the number of levels that must be traversed to verify sentences.

More levels → slower response time

Does a shark swim? Shark → fish (one level away, where "can swim" is stored)

Does a canary eat food? Canary → bird → animal (two levels away, where "eats food" is stored)
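Spelled out as a sketch, the prediction amounts to counting "isa" links: the more links traversed, the slower the verification. The tiny taxonomy and the link-counting function below are illustrative assumptions, not Collins and Quillian's actual materials or procedure.

```python
from typing import Optional

# A toy "isa" hierarchy (illustrative; not the actual TLC network).
ISA = {
    "canary": "bird",
    "shark": "fish",
    "bird": "animal",
    "fish": "animal",
    "animal": None,
}

def levels_to(concept: str, category: str) -> Optional[int]:
    """Count the 'isa' links traversed from a concept up to a category;
    None means the category is never reached (a "no" sentence)."""
    steps, node = 0, concept
    while node is not None:
        if node == category:
            return steps
        node = ISA[node]
        steps += 1
    return None

# TLC predicts RT("A canary is a bird") < RT("A canary is an animal"),
# because the second sentence requires traversing one extra level.
print(levels_to("canary", "bird"))    # 1
print(levels_to("canary", "animal"))  # 2
```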

TLC Results

“Yes” responses

Prediction supported; more levels traversed → longer reaction time

“No” responses

Prediction not supported. Subjects took longer to respond “no” to “Is a canary purple?” than to respond “yes” to a matched true sentence at the same level (such as “Is a canary yellow?”).

According to the model, those response times should be about the same.

Problems with TLC

(1) Typicality – TLC assumes that all members of a category are equivalent.

But some members of a category are more typical than others: “Is a robin a bird?” is verified faster than “Is a turkey a bird?” because a robin is a more typical member of the category “bird”.

(2) Cognitive economy – Conrad (1972) gave subjects pairs of sentences such as “A rabbit has ears” (faster) and “A rabbit can move.”

“Has ears” is verified faster, presumably because it is stored directly with the concept “rabbit” rather than inherited from a higher level.

This is a violation of cognitive economy.

Cognitive economy comes in two forms. Strong form: information is always stored as high as possible in the hierarchy. Weak form: information is usually stored as high as possible, but exceptions are allowed (“rabbit” and “has ears”).

(3) The nail in the coffin – Hierarchical organization (Rips, Shoben, and Smith, 1973)

A pig is a mammal (should be faster)

A pig is an animal

TLC prediction

Pig → mammal → animal (two levels), which should be slower than

Pig → mammal (one level)

But subjects verified “A pig is an animal” much faster.

Solving the Problems: Maybe TLC could be patched up; maybe an entirely new approach was needed

[Figure: a spreading activation graph from a single origin]

(2) Spreading Activation Model (Collins and Loftus 1975)

Retained the idea of nodes linked in a network, but the hierarchical organization was scrapped. The new organizing principle is semantic distance: closely related concepts are stored near each other. Distance represents typicality.

Semantic distance (the lengths of the links) takes care of typicality. An “isnota” link is added.

Spreading activation (the heart of the model): an activated concept spreads its activation to other concepts in the network, like the ripples from a rock dropped into water.

Example

Does a fish eat food?

Does a salmon have an udder?
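Here is a minimal sketch of the spreading-activation idea: activation ripples outward from an activated concept and weakens with semantic distance, so closely linked concepts (robin–bird) end up more active than distant ones (robin–turkey). The concepts, link distances, decay rate, and number of rounds are illustrative assumptions, not Collins and Loftus's parameters.

```python
from collections import defaultdict

# Links are weighted by semantic distance (illustrative values):
# a shorter link means a more closely related, more typical pairing.
LINKS = {
    "robin":  [("bird", 1.0), ("red breast", 2.0)],
    "bird":   [("robin", 1.0), ("turkey", 3.0), ("animal", 2.0), ("wings", 1.5)],
    "turkey": [("bird", 3.0), ("thanksgiving", 1.5)],
    "animal": [("bird", 2.0), ("fish", 2.5)],
}

def spread(source: str, initial: float = 1.0, decay: float = 0.5, rounds: int = 2) -> dict:
    """Activation ripples outward from the source concept (like a rock rippling
    water), losing strength with each link in proportion to its length."""
    activation = defaultdict(float)
    activation[source] = initial
    frontier = {source: initial}
    for _ in range(rounds):
        next_frontier = defaultdict(float)
        for node, act in frontier.items():
            for neighbor, distance in LINKS.get(node, []):
                passed = act * decay / distance
                activation[neighbor] += passed
                next_frontier[neighbor] += passed
        frontier = next_frontier
    return dict(activation)

acts = spread("robin")
print(acts["bird"], acts["turkey"])  # the close "bird" node is far more active than "turkey"
```

In the same spirit, “fish” and “eat food” are close in the network, so that question is verified quickly, while “salmon” and “udder” share almost no activation.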

(3) Feature Comparison Model (Smith, Shoben, and Rips, 1974)

Network models too flawed to work

Concepts are stored in memory as sets of features.

Some features are defining (central)

Some features are characteristic (less important or accidental)

Concept:  “bird”

Defining features: warm blooded and lays eggs

Characteristic features: flies, eats worms, hops around on ground, sings sweetly

Feature comparison

Stage 1 – Global comparison (not picky), looking at both defining and characteristic features. A similarity index is computed for the two feature lists. High similarity (“Is a robin a bird?”) leads to a fast “yes” response; very low similarity leads to a fast “no” response.

Stage 2 – Occurs only when similarity is intermediate. All and only the defining features are compared.

If all the defining features match: “yes” (“A whale is a mammal”).

If any of the defining features do not match: “no”.

Stage 1 processing is holistic (not picky, counting everything), fast, intuitive, and error-prone.

Stage 2 processing is selective, slow, logical, and error-free.
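To make the two stages concrete, here is a minimal sketch of the process. The feature lists, the set-overlap similarity index, and the thresholds for “high” and “low” similarity are illustrative assumptions, not the model's published parameters.

```python
# A minimal sketch of the two-stage feature comparison process. The feature
# lists, similarity measure, and thresholds are illustrative assumptions.

CONCEPTS = {
    "bird":   {"defining": {"warm blooded", "lays eggs"},
               "characteristic": {"flies", "eats worms", "sings sweetly"}},
    "robin":  {"defining": {"warm blooded", "lays eggs"},
               "characteristic": {"flies", "eats worms", "sings sweetly", "red breast"}},
    "mammal": {"defining": {"warm blooded", "nurses young"},
               "characteristic": {"has fur", "lives on land"}},
    "whale":  {"defining": {"warm blooded", "nurses young"},
               "characteristic": {"swims", "lives in water", "is huge"}},
}

def all_features(name: str) -> set:
    c = CONCEPTS[name]
    return c["defining"] | c["characteristic"]

def verify(subject: str, category: str, high: float = 0.6, low: float = 0.2) -> str:
    """Stage 1: holistic comparison of every feature (defining + characteristic).
    Extreme similarity gives a fast, intuitive answer; intermediate similarity
    triggers Stage 2, a slow check of the defining features only."""
    a, b = all_features(subject), all_features(category)
    similarity = len(a & b) / len(a | b)  # set overlap as the similarity index
    if similarity >= high:
        return "yes (fast, Stage 1)"
    if similarity <= low:
        return "no (fast, Stage 1)"
    # Stage 2: do all of the category's defining features hold for the subject?
    if CONCEPTS[category]["defining"] <= CONCEPTS[subject]["defining"]:
        return "yes (slow, Stage 2)"
    return "no (slow, Stage 2)"

print(verify("robin", "bird"))    # high overlap  -> fast "yes"
print(verify("whale", "mammal"))  # intermediate  -> Stage 2 -> "yes"
print(verify("robin", "mammal"))  # low overlap   -> fast "no"
```

Note how this arrangement handles typicality without a hierarchy: a typical member shares many characteristic features with its category, so it clears Stage 1 immediately.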

Later developments

Mostly elaborations of network models.

Some cognitive economy retained.

The feature comparison model, by contrast, has zero cognitive economy.
