
Bulletin of The British Psychological Society (1981), 34, 128-129.


John J. Ray

There can be little doubt that most psychologists would like to base their research on random samples of the general population. Because of financial and ethical constraints, however, well over 90 per cent of published work is based on animals or available groups (non-samples) of students. This does of course raise profound questions about the generalizability and relevance of much that we do. Examples are hardly required to show that a relationship observed in one group of subjects may be totally absent in another. My own favourite example is from the very first piece of research I did. I correlated serial verbal learning and verbal fluency in a group of students and found a correlation of 0.8. Such a strong finding would surely find its way into many a journal. Yet when I replicated the experiment on a group of Army recruits, the relationship dropped to non-significance! Clearly, the student-based findings should not be generalized and must remain of unknown significance.

Yet general population sampling is not all that difficult. Using the methods of the public opinion polls (cluster sampling), I find that I can get a community sample of 100 people for a questionnaire study in about three weeks of hard doorknocking. Obviously not all of psychology can be reduced to questionnaires, but that part which can could easily employ samples rather than non-samples.

The major difficulty, however, is a very paradoxical one indeed. Journal editors who cheerfully let by study after study based on non-samples of students are almost invariably fierce critics of community samples. It is much easier to get studies based on non-samples published! I myself would probably never have believed this if I submitted only the statutory one or two articles per year for publication. As it is, however, I submit to academic journals around the world up to 40 articles per year, usually based on general population sampling such as I have described above. Although most of these articles do eventually get published, they never appear in first-ranking journals and they are almost never accepted the first time around. And by far the commonest reason for the many rejections I receive is a claim that the sampling was inadequate.

I could gladly accept a claim that my reasoning was defective, my expression poor or my interests trivial, but what gives offence is in fact my sampling. I do occasionally point out to editors that the sampling they reject is at least better than 95 per cent of what they publish, but this of course engenders defensiveness and I never get any clear idea of why real samples are treated so critically.

I believe that much of the problem may lie in a tendency for editors and referees to judge one's work relative to one's own attempted goal rather than by any absolute standard. Commendable though this could certainly be in some circumstances, it does lead to the paradox that the less one attempts, the more acceptable one's achievement is. It means that a group of 30 third-year students in a particular course at a polytechnic may be acceptable as a data source but a random doorstep cluster sample of 100 people gathered throughout a major metropolitan area may not.

The ostensible reason given by editors for questioning a general population sample is remarkably uniform. If it is a doorstep sample they will object to the fact that it is clustered (in cluster sampling it is starting-points rather than people that are randomly selected), while if it is a postal sample they will express concern at the non-response rate. Both types of sample may also be criticized for the small n. While such objections are not without some substance, it must be realized that commercial public opinion polls almost always use cluster sampling, generally have a massive non-response rate and yet still provide highly accurate predictions of things like percentage vote for political candidates. It is true that the n for public opinion polls is generally in the thousands rather than in the hundreds, but this is only because a very exact estimate of parameters is required. To a political candidate seeking office, a half per cent can mean the difference between success and failure. To a social scientist, by contrast, an effect explaining less than 5 per cent of the variance would probably be of little interest. Yet with an n of 100 a correlation explaining as little as 3.8 per cent of the variance can be shown to be significant (Edwards, 1960, p. 362). Large samples are not required for the purpose to which social scientists generally put them. Even if the usual editorial objections to community samples were more substantial, however, the point still remains that the usual (non-) sample of students is surely even more susceptible to them. Yet non-samples of students fill the pages of our journals!
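The arithmetic behind that claim can be verified directly from the significance test for a Pearson correlation, where the critical value at a given alpha is r = t / sqrt(t² + df) with df = n − 2. The short sketch below (Python with SciPy; not part of the original article) computes the critical r for n = 100 at the .05 two-tailed level, which squares to roughly the figure Edwards tabulates:

```python
# Check of the claim that, with n = 100, a correlation explaining
# under 4 per cent of the variance is statistically significant at the
# .05 level (two-tailed). For a Pearson correlation the critical value
# is r = t / sqrt(t^2 + df), where df = n - 2.
import math
from scipy.stats import t as t_dist

n = 100
df = n - 2
t_crit = t_dist.ppf(1 - 0.05 / 2, df)          # two-tailed critical t
r_crit = t_crit / math.sqrt(t_crit ** 2 + df)  # critical Pearson r

print(f"critical r = {r_crit:.3f}")            # about 0.197
print(f"variance explained = {r_crit ** 2:.1%}")  # about 3.9%
```

The small discrepancy with the 3.8 per cent quoted in the text simply reflects the rounding of the tabled critical r (0.195 versus the exact 0.197).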

A possible hypothesis that might occur to one at this juncture is that results based on available groups of students (non-samples) are probably presented more tentatively. Perhaps community samples are treated more suspiciously only because the claims made for them are larger. A glance at almost any issue of any psychological journal will dispel this hypothesis. Authors may possibly have the proper reservations in the back of their minds but the findings are generally presented as telling us something about processes of a quite general kind. People talk as if they were studying (for example) 'attribution' - not just 'attribution among students'.

Clearly, this is an area where some sort of absolute standards are required if better based research is to be encouraged rather than discouraged. One example of such standards might be the lone policy of the Journal of Social Psychology, which simply states that it gives preference to community-based research.


Edwards, A. L. (1960). Experimental Design in Psychological Research. New York: Holt, Rinehart & Winston.
