Linguistic relativity or the Sapir-Whorf hypothesis

From Wikipedia:
The linguistic relativity principle, or the Sapir-Whorf hypothesis,[1] is the idea that differences in the way languages encode cultural and cognitive categories affect the way people think, so that speakers of different languages think and behave differently because of it. A strong version of the hypothesis holds that language determines thought and that linguistic categories limit and determine cognitive categories. A weaker version states that linguistic categories and usage influence thought and certain kinds of non-linguistic behaviour.

The idea was first clearly expressed by 19th-century national romantic thinkers, such as Wilhelm von Humboldt, who saw language as the expression of the spirit of a nation. The early 20th-century school of American anthropology headed by Franz Boas and Edward Sapir also embraced the idea. Sapir’s student Benjamin Lee Whorf came to be seen as the primary proponent of the hypothesis, because he published observations of how he perceived linguistic differences to have consequences for human cognition and behaviour. Whorf’s ideas were widely criticised, and Roger Brown and Eric Lenneberg decided to put them to the test. They reformulated Whorf’s principle of linguistic relativity as a testable hypothesis, now called the Sapir-Whorf hypothesis, and conducted experiments designed to find out whether color perception varies between speakers of languages that classify colors differently. As the study of the universal nature of human language and cognition came into focus in the 1960s, the idea of linguistic relativity fell out of favor. A 1969 study by Brent Berlin and Paul Kay showed that color terminology is subject to universal semantic constraints, and the Sapir-Whorf hypothesis came to be seen as completely discredited.

From the late 1980s, a new school of linguistic relativity scholars has examined the effects of differences in linguistic categorization on cognition, finding broad support for weak versions of the hypothesis in experimental contexts.[2] Effects of linguistic relativity have been shown particularly in the domain of spatial cognition and in the social use of language, but also in the field of color perception. Recent studies have shown that color perception is particularly prone to linguistic relativity effects when colors are processed in the left brain hemisphere, suggesting that this hemisphere relies more on language than the right one does.[3] Currently, most linguists hold a balanced view of linguistic relativity: language influences certain kinds of cognitive processes in non-trivial ways, but other processes are better seen as subject to universal factors. Current research is focused on exploring the ways in which language influences thought and on determining to what extent it does so.[2] The principle of linguistic relativity and the relation between language and thought have also received attention in academic fields ranging from philosophy to psychology and anthropology, and the principle has inspired works of fiction and the invention of constructed languages.

Present status

Current researchers, such as cognitive scientist Lera Boroditsky of Stanford University, believe that language influences thought, but in more limited ways than the broadest early claims suggested. Exploring these limits has sparked novel research that increases both the scope and the precision of earlier examinations. Current studies of linguistic relativity are neither marked by the naive approach to exotic linguistic structures, and their often merely presumed effects on thought, that characterized the early period, nor are they ridiculed and discouraged as they were in the universalist period. Instead of proving or disproving a theory, researchers in linguistic relativity now examine the interface between thought, language and culture, and describe the degree and kind of interrelatedness. Usually, following the tradition of Lenneberg, they use experimental data to back up their conclusions.

Programming languages

Kenneth E. Iverson, the originator of the APL programming language, believed that the Sapir–Whorf hypothesis applied to computer languages (without actually mentioning the hypothesis by name). His Turing Award lecture, “Notation as a tool of thought”, was devoted to this theme, arguing that more powerful notations aid thinking about computer algorithms.[43]
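Iverson’s point is easier to see in code than in prose. The sketch below is only a loose, hypothetical illustration, not an example from the lecture, and it uses Python rather than APL: the same computation written first as an explicit loop and then as a single composed expression, closer in spirit to APL’s one-line notation.

```python
def mean_of_squares_loop(xs):
    """Average of squares, spelled out step by step."""
    total = 0
    for x in xs:
        total += x * x
    return total / len(xs)

def mean_of_squares_expr(xs):
    """The same algorithm as one composed expression; the shape of the
    computation (square, sum, divide) is visible at a glance."""
    return sum(x * x for x in xs) / len(xs)

# Both forms agree; Iverson's argument concerns which one is easier to think with.
assert mean_of_squares_loop([1, 2, 3]) == mean_of_squares_expr([1, 2, 3])
```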

The essays of Paul Graham explore similar themes, such as a conceptual hierarchy of computer languages, with more expressive and succinct languages at the top. Thus, the so-called Blub paradox (after a hypothetical programming language of average complexity called ‘Blub’) says that anyone preferentially using some particular programming language will ‘know’ that it is more powerful than some languages, but not that it is less powerful than others. The reason is that writing in some language means thinking in that language. Hence the paradox: programmers are typically “satisfied with whatever language they happen to use, because it dictates the way they think about programs”.[44]
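To make the paradox concrete, here is a small hypothetical sketch in Python; Graham’s essay contains no such code, so the feature chosen here (first-class functions) and all the identifiers are illustrative assumptions. A programmer whose language lacks the feature rewrites the same loop again and again and, looking ‘up’ the hierarchy, sees the abstraction as unnecessary rather than as something missing.

```python
# In a Blub-like language without first-class functions, each variation
# of the same traversal pattern becomes its own hand-written loop:

def total_price(items):
    result = 0
    for item in items:
        result += item["price"]
    return result

def total_weight(items):
    result = 0
    for item in items:
        result += item["weight"]
    return result

# In a more expressive language, the pattern itself is abstractable.
# Looking "down" the hierarchy, the duplication above is obviously clumsy;
# looking "up" from Blub, the abstraction below just looks unnecessary.

def total(items, key):
    return sum(item[key] for item in items)

items = [{"price": 3, "weight": 10}, {"price": 5, "weight": 2}]
assert total(items, "price") == total_price(items) == 8
assert total(items, "weight") == total_weight(items) == 12
```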

In a 2003 presentation at an open source convention, Yukihiro Matsumoto, creator of the programming language Ruby, said that one of his inspirations for developing the language was the science fiction novel Babel-17, which is based on the Sapir-Whorf hypothesis.[45]

