Literature Review - Article Critique
BRE810 S116 Assessment Item 2: Literature Review - Article Critique

Description/Focus: Literature review - article critique
Value: 35%
Due date: Midnight Friday, Week 10
Length: 2500-3000 words
Task: Undertake a literature search on the topic 'The impact of culture on learning for international students studying in Australia'. Select two academic journal articles and perform an article critique on each.
Presentation: Submit as a Word document only. Include appropriate headings; font Arial 11. CDU Harvard referencing style required. Submit via Learnline through the SafeAssign submission point.
Assessment criteria: Refer to the assessment rubric on Learnline.

Background to the task
The assessment requires you to select two academic journal articles in the topic area 'The impact of culture on learning for international students studying in Australia' and perform an article critique on each. An academic article critique requires you to engage with the articles you have selected by summarising key points or arguments and evaluating or critiquing the way in which the author conducted or presented the research. You will need to report each article's main findings or ideas, its contribution to knowledge and/or the methodologies used in the research, along with limitations and/or identification of future research. You are expected to critique these articles based on the concepts and understandings about conducting research that you have gained from this unit, together with the knowledge you have of your selected area.

Hint: Select your articles carefully, as not all academic articles will present you with information that enables you to critique easily, or provide you with the opportunity for thoughtful engagement. Also, remember what you have been learning in class.
Some useful sources
Note: to access the links, copy and paste them into your internet browser.
• Charles Darwin University, Academic assignment - other assignment types - critiques, accessed 15.06.15 at http://learnline.cdu.edu.au/studyskills/studyskills/critiques.html
• University of the Fraser Valley, Article review/critique, accessed 15.06.15 at http://www.ufv.ca/media/assets/writing-centre/article+review+and+critique.pdf
• University of New South Wales, Writing a critical review, accessed 15.06.15 at https://student.unsw.edu.au/writing-critical-review
BUSINESS RESEARCH Assessment 2 Rubric - Literature Review-Article Critique (35% of the final grade)

Grade bands: HD 85-100%, D 75-84%, C 65-74%, P 50-64%, F 0-49%

Completeness (/10; marks: HD 10, D 8-9, C 6-7, P 5, F <5)
• HD: Complete in all respects; reflects all requirements. Fully addresses all aspects of the writing assignment. Stays on task throughout.
• D: Complete in most respects; reflects most requirements. Clear, controlled and focussed direction. Clearly linked and logically ordered points and highly relevant, well-developed details.
• C: Incomplete in many respects; reflects few requirements. Focussed direction and basic logical order of points.
• P: Incomplete in most respects. Direction is not very focussed. Flow of the content is somewhat flawed.
• F: Does not reflect requirements. No direction. Flow of the report is majorly flawed.

Language/Presentation (/10; marks: HD 10, D 8-9, C 6-7, P 5, F <5)
• HD: Clear and fluent expression indicates the report has been successfully edited and proofread before submission. Correct form for text type (headings, indentations etc.); spelling and punctuation error-free.
• D: Mastery of sentence patterns demonstrated; may have occasional grammatical errors at the sentence level, suggesting that some closer proofreading was needed. Form, punctuation and spelling mostly error-free.
• C: Sentence patterns most often successfully used; several grammatical errors at the sentence level. Occasional errors in form, punctuation and spelling; sometimes distracting. Some additional editing and proofreading is warranted.
• P: Simple and complex sentences attempted but often unsuccessfully; grammatical errors distract from meaning. Form, punctuation and spelling errors are distracting.
• F: Run-on sentences; attempts at simple sentences often not successful; many errors in sentence structure detract from the communication purpose. Form, punctuation and spelling errors throughout.

Understanding (/10; marks: HD 10, D 8-9, C 6-7, P 5, F <5)
• HD: Demonstrates a sophisticated understanding of the topic(s) and issue(s).
• D: Demonstrates an accomplished understanding of the topic(s) and issue(s).
• C: Demonstrates an acceptable understanding of the topic(s) and issue(s).
• P: Demonstrates a basic understanding of the topic(s) and issue(s).
• F: Limited or no understanding of the topics and issues.

Content (/30; marks: HD 25-30, D 21-24, C 18-20, P 15-17, F <15)
• HD: Comprehensive and lucid exploration and thoughtful, well-researched understanding of the articles. Report discussion and analysis addresses: key arguments, contribution to knowledge, methodology, limitations, ethical considerations.
• D: A good discussion of the articles. Report discussion and analysis addresses: key arguments, contribution to knowledge, methodology, limitations, ethical considerations.
• C: A fair discussion of the articles. Little detail or information provided about the articles. Report mostly satisfies requirements.
• P: Not much detail provided about the articles. The report lacks succinctness. The content does not provide clear and concise information.
• F: No details provided about the articles. The report lacks succinctness and clarity.

Critical analysis (/20; marks: HD 18-20, D 16-17, C 13-15, P 10-12, F <10)
• HD: An extensive critical analysis of the selected journal articles.
• D: Good critical analysis of the selected journal articles.
• C: Reasonable attempt to critically analyse the selected journal articles.
• P: Some analysis of the selected journal articles, mainly descriptive.
• F: No critical analysis of the selected journal articles. Mainly definitional.

Conclusions (/10; marks: HD 10, D 8-9, C 6-7, P 5, F <5)
• HD: Excellent ability to interpret, evaluate and formulate logically sound conclusions.
• D: Good demonstration of the capacity to critically analyse information and formulate own conclusions.
• C: Able to draw warranted conclusions and generalisations.
• P: Limited ability to draw conclusions.
• F: No critical analysis, poor conclusions and no original thought.

Research and referencing (/10; marks: HD 10, D 8-9, C 6-7, P 5, F <5)
• HD: Excellent reference list and application of in-text references. Referencing presented correctly. (7-8 additional references used)
• D: Very good use of high academic standard articles. Proper referencing done as per the standard requirements. (5-6 additional references used)
• C: Good articles used. Above-average standard of referencing requirements. (5 additional references used)
• P: Average use of journal articles. Most of the content taken from non-academic journals. Not meeting all the referencing requirements. (3-4 additional references used)
• F: No journal articles referred to. No referencing requirements met, or plagiarised.

Total: /100
As researchers are always working under time and budget constraints, it is pertinent to consider whether the data needed to examine the research questions is already available. This session will help students to define, understand, source and present the different types of secondary data that are appropriate to the research topic and questions being examined.

The outcomes of this session are:
1. Discuss the advantages and disadvantages of secondary data
2. Define types of secondary data analysis conducted by business research managers
3. Identify various internal and proprietary sources of secondary data
4. Give examples of various external sources of secondary data
5. Understand and describe what a literature review is, its purpose and how to identify a gap in the literature
6. Understand how to source and present appropriate literature
7. Understand how to critique secondary data

Research projects should begin with secondary data, which are gathered and recorded by someone else prior to, and for purposes other than, the current project. Secondary data usually are already assembled; they require no access to respondents or subjects.

The primary advantage of secondary data is their availability. Obtaining secondary data is almost always faster and less expensive than acquiring primary data. This is particularly true when researchers use electronic retrieval to access data stored digitally. In many situations, collecting secondary data is instantaneous and free.

Secondary data are essential in instances when data simply cannot be obtained using primary data collection procedures. For example, a manufacturer of farm implements could not duplicate the information in the Census of Agriculture, due to the sheer amount of information and the fact that much of the information (for example, amount of taxes paid) might not be provided to a private firm.
Similarly, in India researchers use census estimates to track sensitive topics like child labor rates, which would be simply overwhelming for a private organization to undertake.

An inherent disadvantage of secondary data is that they were not designed to meet the researchers' specific needs. Thus, researchers must ask how pertinent the data are to their particular project. To evaluate secondary data, researchers should ask questions such as these:
■ Is the subject matter consistent with our problem definition?
■ Do the data apply to the population of interest?
■ Do the data apply to the time period of interest?
■ Do the secondary data appear in the correct units of measurement?
■ Do the data cover the subject of interest in adequate detail?

When secondary data are reported in a format that does not exactly meet the researcher's needs, data conversion may be necessary. Data conversion (also called data transformation) is the process of changing the original form of data to a format more suitable for achieving a stated research objective. For example, sales for food products may be reported in pounds, cases, or dollars. An estimate of dollars per pound may be used to convert dollar volume data to pounds or another suitable format.

Another disadvantage of secondary data is that the user has no control over their validity, a topic we will discuss in more detail later. For now, think of this as representing data accuracy or trustworthiness. Although timely and pertinent secondary data may fit the researcher's requirements, the data could be inaccurate. The research methods used to collect the data may have somehow introduced bias. For example, media often publish data from surveys to identify the characteristics of their subscribers or viewers, and these reports will sometimes exclude derogatory data. Good researchers avoid data with a high likelihood of bias or for which the overall accuracy cannot be determined.
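The data-conversion step described above can be sketched in a few lines of code. This is a minimal illustration only; the price-per-pound estimate and the sales figure are hypothetical values chosen to show the mechanics.

```python
# Data conversion (data transformation) as described above: secondary
# sales data reported in dollars is converted to an estimated volume
# in pounds using an average price-per-pound estimate. The figures
# are hypothetical, chosen only to show the mechanics.

def dollars_to_pounds(sales_dollars, est_price_per_pound):
    """Convert dollar sales volume to an estimated volume in pounds."""
    if est_price_per_pound <= 0:
        raise ValueError("price per pound must be positive")
    return sales_dollars / est_price_per_pound

# Example: $12,500 of sales at an estimated average of $2.50 per pound
print(dollars_to_pounds(12_500, 2.50))  # 5000.0
```

The same pattern applies to any unit change (cases to pounds, local currency to a common currency): the conversion factor is itself an estimate, so the converted figures inherit its uncertainty.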
Researchers should verify the accuracy of the data whenever possible. Cross-checks of data from multiple sources should be made to determine the similarity of independent projects. When the data are not consistent, researchers should attempt to identify reasons for the differences or to determine which data are most likely to be correct. If the accuracy of the data cannot be established, the researcher must determine whether using the data is worth the risk.

Exhibit 8.1 illustrates a series of questions that should be asked to evaluate secondary data before they are used.

The simplest form of secondary data research is fact-finding. For example, a restaurant serving breakfast might be interested in knowing what new products are likely to entice consumers. A typical objective for a secondary research study might be to uncover all available information about consumption patterns for a particular product category, or to identify demographic trends that affect an industry.

Business researchers are challenged to constantly watch for trends in the marketplace and the environment. Market tracking is the observation and analysis of trends in industry volume and brand share over time. Scanner research services and other organizations provide facts about sales volume to support this work. Almost every large consumer goods company routinely investigates brand and product category sales volume using secondary data.

In many instances, the purpose of fact-finding is simply to study the environment to identify trends. Environmental scanning entails information gathering and fact-finding designed to detect indications of environmental changes in their initial stages of development. The Internet can be used for environmental scanning; however, there are other means, such as periodic review of contemporary publications and reports.

Secondary data, despite the name, may seem to lack power compared to primary data.
However, with secondary data, researchers can test research questions that would be difficult to examine any other way. For example: What firms are the market leaders in their industry? What matters when it comes to firm performance? Does customer satisfaction ultimately lead to superior firm performance? Given that firm performance is a property of the company and not of its customers or employees, the researcher cannot directly capture this with surveys. Therefore, researchers turn to secondary data to try to isolate controllable variables that drive firm performance.

The second general objective for secondary research, model building, is more complicated than simple fact-finding. Model building involves specifying relationships between two or more variables, perhaps extending to the development of descriptive or predictive equations. Models need not include complicated mathematics, though. In fact, decision makers often prefer simple models that everyone can readily understand over complex models that are difficult to comprehend. For example, market share is company sales divided by industry sales. Although some may not think of this simple calculation as a model, it represents a mathematical model of a basic relationship.

Large corporations' decision support systems often contain millions or even hundreds of millions of records of data. These data volumes are too large to be understood by managers. The term data mining refers to the use of powerful computers to dig through volumes of data to discover patterns about an organization's customers and products. Neural networks are a form of artificial intelligence in which a computer is programmed to mimic the way that human brains process information. One computer expert put it this way: A neural network learns pretty much the way a human being does. Suppose you say "big" and show a child an elephant, and then you say "small" and show her a poodle.
You repeat this process with a house and a giraffe as examples of "big" and then a grain of sand and an ant as examples of "small." Pretty soon she will figure it out and tell you that a truck is "big" and a needle is "small." Neural networks can similarly generalize by looking at examples.

One way to find out what people are thinking these days is to read what they are posting on their blogs. But with tens of millions of blogs available on the Internet, there is no way to read them all. One solution: data-mining software designed for the blogosphere.

Market-basket analysis is a form of data mining that analyzes anonymous point-of-sale transaction databases to identify coinciding purchases or relationships between products purchased and other retail shopping information. Consider this example about patterns in customer purchases: Osco Drugs mined its databases provided by checkout scanners and found that when men go to its drugstores to buy diapers in the evening between 6:00 p.m. and 8:00 p.m., they sometimes walk out with a six-pack of beer as well. Knowing this behavioural pattern, supermarket managers may consider laying out their stores so that these items are closer together.

Customer discovery is a data-mining application that similarly involves mining data to look for patterns that can increase the value of customers. For example, Macy's commissioned data-mining techniques looking for patterns of relationships among the huge volumes of previous sales records. In 2011, Macy's sent out millions of catalogues. Not every customer got the same catalogue, though; in fact, tens of thousands of versions of the catalogue were carefully tailored to specific customers.

CRM (customer relationship management) systems are decision support systems that manage the interactions between an organization and its customers. A CRM system maintains customer databases containing customers' names, addresses, phone numbers, past purchases, responses to past promotional offers, and other relevant data such as demographic and financial data. Database marketing is the practice of using CRM databases to develop one-to-one relationships and precisely targeted promotional efforts with individual customers.

Secondary data can be classified as either internal to the organization or external. Modern information technology makes this distinction seem somewhat simplistic.
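The market-basket idea described above (finding items that co-occur in the same transactions) can be sketched with the two standard association-rule measures, support and confidence. The transaction data below is invented purely for illustration of the Osco-style "diapers and beer" pattern.

```python
# A minimal sketch of market-basket analysis: count how often items
# co-occur across transactions and compute the support and confidence
# of the rule "diapers -> beer". The transactions are invented.

transactions = [
    {"diapers", "beer", "chips"},
    {"diapers", "beer"},
    {"diapers", "wipes"},
    {"beer", "chips"},
    {"diapers", "beer", "wipes"},
]

def support(items, transactions):
    """Share of all transactions that contain every item in `items`."""
    hits = sum(1 for basket in transactions if items <= basket)
    return hits / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Estimated P(consequent in basket | antecedent in basket)."""
    return (support(antecedent | consequent, transactions)
            / support(antecedent, transactions))

print(support({"diapers", "beer"}, transactions))                 # 0.6
print(round(confidence({"diapers"}, {"beer"}, transactions), 2))  # 0.75
```

Real scanner databases hold millions of baskets, so production systems use optimized algorithms rather than a linear scan, but the measures being computed are the same.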
Some accounting documents are indisputably internal records of the organization; researchers in another organization cannot have access to them. Clearly, a book published by the federal government and located at a public library is external to the company. However, in today's world of electronic data interchange, the data that appear in a book published by the federal government may also be purchased from an online information vendor for instantaneous access and subsequently stored in a company's decision support system.

Internal data should be defined as data that originated in the organization, or data created, recorded, or generated by the organization. Internal and proprietary data is perhaps a more descriptive term. Sources of internal and proprietary secondary data include:
• Accounting information
• Sales information and backorders
• Customer complaints, service records, warranty card returns, and other records
• Intranets

External data are generated or recorded by an entity other than the researcher's organization. The government, newspapers and journals, trade associations, and other organizations create or produce information. Traditionally, this information has been in published form, perhaps available from a public library, trade association, or government agency. Today, however, computerized data archives and electronic data interchange make external data as accessible as internal data.

Because secondary data have value, they can be bought and sold like other products. And just as bottles of perfume or plumbers' wrenches may be distributed in many ways, secondary data also flow through various channels of distribution.
Many users, such as Fortune 500 corporations, purchase documents and computerized census data directly from the government. However, many small companies get census data from a library or another intermediary or vendor of secondary information. Commercial sources can supply:
• Market-share data
• Demographic and census updates
• Consumer attitude and public opinion research
• Consumption and purchase behavior data
• Advertising research

A literature review can be broken down into two parts: process and product. The process is reading, reading and more reading, taking notes, and making decisions about which articles and literature are pertinent to your research. The product is the literature review itself.

"The production of new knowledge is dependent on past knowledge." You need to know what you are doing, and know how your research is going to make a difference. The word literature in social science research refers to research that has already been carried out and published. Although lecturers will encourage you to use academic journal articles, do not forget that some literature is also published in the media. Media reports of research tend to be very short; generally what is published in the media is a brief synopsis of the research, usually without any proper reference to the theoretical framework within which the research project was situated. It is generally only peer-reviewed sources that are used in compiling a literature review for a research project.

The first thing you need to do when writing a literature review is to plan its structure. Sketch your plan for the literature review in your research diary and then write to that plan. To begin with there will be an introduction, followed by a number of subsections, and the chapter ends with a summary. The introduction should be an introduction to the chapter, nothing else and nothing more. The summary is a summary of the chapter, nothing else, nothing more.
So I guess you could say it is like writing a separate piece of work within another work. The main body sections of your literature review are each developed around individual subheadings derived from the conceptual framework. The headings should be presented in a logical way that makes sense.

Purposes of a literature review:
• Inform readers of developments in the field: past research and important works (the Kolbs of the world)
• Establish research credibility
• Argue the need for and relevance of the study

There is no point in conducting research if it has already been done, that is, if your research question has already been answered.

Some key steps involved in a literature review include:
• Searching for research literature efficiently: finding what you need quickly, finding the full text online when available, and avoiding an avalanche of irrelevant references. Hint: your favourite search engine will not find most of the scholarly literature!
• Assessing individual reports of research literature to determine whether their findings and conclusions should be relied upon or are likely to be misleading. Hint: some of the research literature on almost every topic is misleading or trivial.
• Integrating the various studies on a topic to make the best assessments of what is known about the topic, to identify promising future research, to improve conceptual frameworks for research, and to determine the advantages and disadvantages of previously used methodologies.

A literature review can draw on many different sources. Before you begin finding sources, define your topic and pick out key words to search. Once that is done, you can look in many places for good sources: books, journals, reference material, printed abstracts, dissertations, conference papers, and even the internet. Ideally, scholarly journals and books are the most helpful. There are many journals dedicated to research on a certain topic.
The above slide includes some tips for finding a good article for a literature review. It is important that you are able to demonstrate your familiarity with existing work in the field, and the literature review is one common means of demonstrating this familiarity. According to educational psychologist John Creswell, author of numerous research design texts, the literature review does several things:
• It shares with the reader the results of other studies that are closely related to the study being reported;
• It relates a study to the larger ongoing dialogue in the literature about a topic, filling in gaps and extending prior studies; and
• It provides a framework for establishing the importance of the study, as well as a benchmark for comparing the results of the study with other findings.

The following criteria might assist in the process:
• Provenance: What are the author's credentials? Are the author's arguments supported by evidence?
• Objectivity: Is the author's perspective even-handed or prejudicial?
• Persuasiveness: Which of the author's theses are most/least convincing?
• Value: Are the author's arguments and conclusions convincing?

You can bring your own voice into the literature review by:
1. Making your own assertions, with appropriate citations in support, and including explicit linking words and phrases to show the connections between citations and between the different sections and chapters of the text.
2. Making your own position clear in relation to the source material you incorporate, and being explicit about how you will draw on particular aspects of previous work for your own research. By doing this you bring your writer's voice to the front and make it clear that you are using your source texts to suit your own purposes, rather than hiding behind the authority of the cited authors.
This session provides students with the knowledge to determine what needs to be measured in order to address a research question or hypothesis. The topic of sampling in research will also be addressed. Students will be introduced to the process of identifying a target population and selecting a sampling frame, and will come to understand why researchers take a sample rather than a complete census.

The outcomes for this session are:
1. Determine what needs to be measured to address a research question or hypothesis
2. Distinguish levels of scale measurement
3. Know how to form an index or composite measure
4. List the three criteria for good measurement
5. Explain reasons for taking a sample rather than a complete census
6. Describe the process of identifying a target population and selecting a sampling frame
7. Compare random sampling and systematic (non-sampling) errors
8. Identify the types of nonprobability sampling, including their advantages and disadvantages
9. Summarize the advantages and disadvantages of the various types of probability samples
10. Discuss how to choose an appropriate sample design, as well as challenges for Internet sampling

The opening vignette is about a manager of an organisation that plans to undertake the task of measuring performance within the organisation. He brings in many of the operational managers to discuss how employees will be measured on performance: what is performance, and what should be measured? Many of the managers argue for different aspects of the employee's role that should be measured. The owner/manager's situation in this vignette illustrates how difficult it can be to define, let alone measure, important business phenomena. While some items can be measured quite easily, others present tremendous challenges to the business researcher.

The decision statement, corresponding research questions, and research hypotheses can be used to decide what concepts need to be measured in a given project.
Measurement is the process of describing some property of a phenomenon of interest, usually by assigning numbers in a reliable and valid way. The numbers convey information about the property being measured. When numbers are used, the researcher must have a rule for assigning a number to an observation in a way that provides an accurate description.

Measurement can be illustrated by thinking about the way instructors assign students' grades. A grade represents a student's performance in a class, and students with higher performance should receive a different grade than students with lower performance. Even the apparently simple concept of student performance is measured in many different ways. Consider the following options:

A student can be assigned a letter corresponding to his/her performance:
a. A represents excellent performance
b. B represents good performance
c. C represents average performance
d. D represents poor performance
e. E represents failing performance

A student can be assigned a number corresponding to a percentage performance scale:
a. 100 percent represents a perfect score; all assignments are performed correctly.
b. 60-99 percent represents differing degrees of passing performance, each number representing the proportion of correct work.
c. 0-59 percent represents failing performance, but still captures the proportion of correct work.

This is actually not terribly different from a manager who must assign performance scores to employees. In each case, students with different marks are distinguished in some way. However, some scales may better distinguish students, and each scale also has the potential of producing error or some lack of validity.

A researcher has to know what to measure before knowing how to measure something. The problem definition process should suggest the concepts that must be measured. A concept can be thought of as a generalized idea that represents something of meaning.
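The percentage grading rule above can be written out as a measurement correspondence rule: a function that assigns each observation (a score) a value on the scale. The bands follow the 100 / 60-99 / 0-59 split given in the text; the band labels are simply descriptive.

```python
# A correspondence rule maps observations to scale values. Here the
# percentage performance scale from the text is expressed as a
# function, using the 100 / 60-99 / 0-59 bands described above.

def percent_to_band(score):
    """Map a 0-100 percentage score to the performance band it falls in."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score == 100:
        return "perfect"
    if score >= 60:
        return "passing"
    return "failing"

print(percent_to_band(100))  # perfect
print(percent_to_band(72))   # passing
print(percent_to_band(45))   # failing
```

Note that the rule is explicit and exhaustive: every valid observation is assigned exactly one value, which is what makes the measurement reliable.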
Concepts such as age, sex, education, and number of children are relatively concrete properties. They present few problems in either definition or measurement.

Researchers measure concepts through a process known as operationalization. This process involves identifying scales that correspond to variance in the concept. Scales, just like the scale you may use to check your weight, provide a range of values that correspond to different values of the concept being measured. In other words, scales provide correspondence rules indicating that a certain value on a scale corresponds to some true value of a concept; hopefully, they do this in a truthful way.

Sometimes a single variable cannot capture a concept alone. Using multiple variables to measure one concept can often provide a more complete account of the concept than any single variable could. Even in the physical sciences, multiple measurements are often used to make sure an accurate representation is obtained. In social science, many concepts are measured with multiple measurements. A construct is a term used for concepts that are measured with multiple variables. For instance, when a business researcher wishes to measure the customer orientation of a salesperson, several variables like these may be used, each captured on a 1-5 scale:
1. I offer the product that is best suited to a customer's problem.
2. A good employee has to have the customer's best interests in mind.
3. I try to find out what kind of products will be most helpful to a customer.
Constructs can be very helpful in operationalizing a concept.

Nominal scales represent the most elementary level of measurement. A nominal scale assigns a value to an object for identification or classification purposes only. The value can be, but does not have to be, a number, because no quantities are being represented. In this sense, a nominal scale is truly a qualitative scale.
Nominal scales are extremely useful, and are sometimes the only appropriate measure, even though they can be considered elementary. For example, suppose a soft drink company was experimenting with three different types of sweeteners (cane sugar, corn syrup, or fruit extract). The researchers would like the experiment to be blind, so the three drinks that subjects are asked to taste are labeled A, B, or C, not cane sugar, corn syrup, or fruit extract.

Ordinal scales allow things to be arranged in order based on how much of some concept they possess. In other words, an ordinal scale is a ranking scale; in fact, we often use the term rank order to describe an ordinal scale. Research participants often are asked to rank things based on preference. So preference is the concept, and the ordinal scale lists the options from most to least preferred, or vice versa. Five objects can be ranked 1-5 from least preferred to most preferred, or from most preferred to least preferred, with no loss of meaning.

Interval scales have both nominal and ordinal properties, but they also capture information about differences in quantities of a concept. So not only would a sales manager know that a particular salesperson outperformed a colleague (information that would be available with an ordinal measure), but the manager would know by how much. If a professor assigns grades to term papers using a numbering system ranging from 1.0 to 20.0, not only does the scale represent the fact that a student with a 16.0 outperformed a student with a 12.0, but it shows by how much (4.0).

Ratio scales represent the highest form of measurement in that they have all the properties of interval scales, with the additional attribute of representing absolute quantities. Interval scales possess only relative meaning, whereas ratio scales represent absolute meaning. In other words, ratio scales provide iconic measurement. Zero, therefore, has meaning in that it represents an absence of some concept.
An absolute zero is the defining characteristic differentiating ratio from interval scales. For example, money measures economic value on a ratio scale: zero dollars represents the absence of economic value.

Discrete measures are those that take on only one of a finite number of values. A discrete scale is most often used to represent a classification variable. Therefore, discrete scales do not represent intensity of measures, only membership. Common discrete scales include any yes-or-no response, matching, colour choices, or practically any scale that involves selecting from among a small number of categories. Thus, when someone is asked to choose from the following responses
■ Disagree
■ Neutral
■ Agree
the result is a discrete value that can be coded 1, 2, or 3, respectively. This is also an ordinal scale to the extent that it represents an ordered arrangement of agreement. Nominal and ordinal scales are discrete measures, and certain statistics are most appropriate for discrete measures.

Continuous measures are those assigning values anywhere along some scale range in a place that corresponds to the intensity of some concept. Ratio measures are continuous measures. Thus, when Griff measures sales for each salesperson using the dollar amount sold, he is assigning a continuous measure. A number line could be constructed ranging from the least amount sold to the most, and a spot on the line would correspond exactly to a salesperson’s performance.

Earlier, we distinguished constructs as concepts that require multiple variables to measure them adequately. Looking back to the chapter vignette, could it be that multiple items will be required to adequately represent job performance? Likewise, a consumer’s attitude toward some product is usually a function of multiple attributes. An attribute is a single characteristic or fundamental feature of an object, person, situation, or issue.

Multi-item instruments for measuring a construct are called index measures, or composite measures. An index measure assigns a value based on how much of the concept being measured is associated with an observation. Indexes often are formed by putting several variables together. For example, a social class index might be based on three weighted variables: occupation, education, and area of residence.
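The weighted-index idea can be sketched as follows. The weights, category scores, and the choice to weight occupation most heavily are illustrative assumptions, not values from the text.

```python
# Sketch of an index measure: a hypothetical social-class index built
# from three weighted variables. Weights and scores are illustrative
# assumptions, not real survey data.

# Each variable is scored 1-7 (higher = higher standing) for one person.
scores = {"occupation": 6, "education": 5, "residence_area": 4}

# Hypothetical weights; occupation is often treated as the best single
# indicator, so it receives the largest weight here.
weights = {"occupation": 0.5, "education": 0.3, "residence_area": 0.2}

index = sum(weights[v] * scores[v] for v in scores)
print(round(index, 2))
```

Note that the three variables need not correlate strongly with one another for the index to be useful, which is the point made about indexes in the passage that follows.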
Usually, occupation is seen as the single best indicator and would be weighted highest. With an index, the different attributes may not be strongly correlated with each other; a person’s education does not always relate strongly to their area of residence. Composite measures also assign a value based on a mathematical derivation of multiple variables. For example, salesperson satisfaction may be measured by combining questions such as, “How satisfied are you with your job? How satisfied are you with your territory? How satisfied are you with the opportunity your job offers?” For most practical applications, composite measures and indexes are computed in the same way.

Reliability •The degree to which measures are free from random error and therefore yield consistent results; an indicator of a measure’s internal consistency.
Validity •The accuracy of a measure or the extent to which a score truthfully represents a concept.
Sensitivity •A measurement instrument’s ability to accurately measure variability in stimuli or responses; generally increased by adding more response points or more scale items.

Reliability is an indicator of a measure’s internal consistency. Consistency is the key to understanding reliability. A measure is reliable when different attempts at measuring something converge on the same result. Internal consistency represents a measure’s homogeneity; an attempt to measure trustworthiness, for example, may require asking several similar but not identical questions.

Validity is the accuracy of a measure or the extent to which a score truthfully represents a concept. In other words, are we accurately measuring what we think we are measuring? Achieving validity is not a simple matter. Researchers have attempted to assess validity in many ways. They attempt to provide evidence of a measure’s degree of validity by answering a variety of questions. Is there a consensus among other researchers that my attitude scale measures what it is supposed to measure?
Does my measure cover everything that it should? Does my measure correlate with other measures of the same concept? Does the behaviour expected from my measure predict actual observed behaviour?

The four basic approaches to establishing validity are face validity, content validity, criterion validity, and construct validity:
Face Validity •A scale’s content logically appears to reflect what was intended to be measured.
Content Validity •The degree to which a measure covers the breadth of the domain of interest.
Criterion Validity •The ability of a measure to correlate with other standard measures of similar constructs or established criteria.
Construct Validity •Exists when a measure reliably measures and truthfully represents a unique concept.
Convergent Validity •Another way of expressing internal consistency; highly reliable scales contain convergent validity.
Discriminant Validity •Represents how unique or distinct a measure is; a scale should not correlate too highly with a measure of a different construct.

Some key sampling terms:
Population (universe) •Any complete group of entities that share some common set of characteristics.
Population Element •An individual member of a population.
Census •An investigation of all the individual elements that make up a population.
Sample •A subset, or some part, of a larger population.

At a wine tasting, guests sample wine by having a small taste from each of a number of different wines. From this, the taster decides whether he or she likes a particular wine and judges it to be of low or high quality. If an entire bottle had to be consumed to decide, the taster might end up not caring much about the next bottle! In a scientific study, however, in which the objective is to determine an unknown population value, why should a sample rather than a complete census be taken?
Pragmatic Reasons •Budget and time constraints. •Limited access to the total population.
Accurate and Reliable Results •Samples can yield reasonably accurate information. •Strong similarities among population elements make sampling possible. •Sampling may be more accurate than a census.
Destruction of Test Units •Sampling reduces the costs of research in finite populations.

Once the decision to sample has been made, the first question concerns identifying the target population. What is the relevant population? In many cases this question is easy to answer: registered voters may be clearly identifiable, and if a company’s 106-person sales force is the population of concern, there are few definitional problems. In other cases the decision may be difficult. One survey concerning organizational buyer behaviour incorrectly defined the population as the purchasing agents whom sales representatives regularly contacted. After the survey, investigators discovered that industrial engineers within the customer companies rarely talked with the salespeople but substantially affected buying decisions. For consumer-related research, the appropriate population element frequently is the household rather than an individual member of the household. This presents some problems if household lists are not available.

Random sampling error is the difference between the sample result and the result of a census conducted using identical procedures. Random sampling error occurs because of chance variation in the scientific selection of sampling units. The sampling units, even if properly selected according to sampling theory, are not likely to perfectly represent the population, but generally they are reliable estimates. Systematic (nonsampling) errors result from nonsampling factors, primarily the nature of a study’s design and the correctness of its execution. These errors are not due to chance fluctuations.
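Random sampling error, as just defined, can be illustrated with a small simulation. The population values below are simulated (hypothetical salesperson sales figures), not data from the text.

```python
# Illustrating random sampling error: the chance gap between a sample
# result and the census result. The population here is simulated.
import random

random.seed(42)
population = [random.gauss(50_000, 10_000) for _ in range(10_000)]
census_mean = sum(population) / len(population)

sample = random.sample(population, 100)   # simple random sample, n = 100
sample_mean = sum(sample) / len(sample)

# Random sampling error: the difference due to chance variation alone.
sampling_error = sample_mean - census_mean
print(round(sampling_error, 2))
```

Rerunning with a different seed gives a different error, which is exactly the point: the error is random, unlike the systematic biases described next.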
As an example of systematic error, highly educated respondents are more likely to cooperate with mail surveys than poorly educated ones, for whom filling out forms is more difficult and intimidating. Sample biases such as these account for a large portion of errors in business research. The term sample bias is somewhat unfortunate, because many forms of bias are not related to the selection of the sample.

Random sampling errors and systematic errors associated with the sampling process may combine to yield a sample that is less than perfectly representative of the population. The total population can be pictured as the area of a large square. Sampling frame errors eliminate some potential respondents. Random sampling error (due exclusively to random, chance fluctuation) may cause an imbalance in the representativeness of the group. Additional errors will occur if individuals refuse to be interviewed or cannot be contacted; such nonresponse error may also cause the sample to be less than perfectly representative. Thus, the actual sample is drawn from a population different from (or smaller than) the ideal.

In probability sampling, every element in the population has a known, nonzero probability of selection. In addition, a probability sample has an element of true randomness in the selection process. The simple random sample, in which each member of the population has an equal probability of being selected, is the best-known probability sample. In nonprobability sampling, the probability of any particular member of the population being chosen is unknown. The selection of sampling units in nonprobability sampling is quite arbitrary, as researchers rely heavily on personal judgment. Technically, no appropriate statistical techniques exist for measuring random sampling error from a nonprobability sample, so projecting the data beyond the sample is, statistically speaking, inappropriate. Nevertheless, as the “How Much Does Your Prescription Cost? It Depends on Who You Buy It From” Research Snapshot on prescription drug costs shows, researchers sometimes find nonprobability samples suitable for a specific purpose. As a result, nonprobability samples are pragmatic and are used in business research.

As the name suggests, convenience sampling refers to sampling by obtaining people or units that are conveniently available.
Judgment (purposive) sampling is a nonprobability sampling technique in which an experienced individual selects the sample based on his or her judgment about some appropriate characteristics required of the sample members. Researchers select samples that satisfy their specific purposes, even if they are not fully representative.

The purpose of quota sampling is to ensure that the various subgroups in a population are represented on pertinent sample characteristics to the exact extent that the investigators desire. Stratified sampling, a probability sampling procedure described in the next section, also has this objective, but it should not be confused with quota sampling. In quota sampling, the interviewer has a quota to achieve. For example, the interviewer may be assigned 100 interviews, 75 with full-time students and 25 with part-time students, and is responsible for finding enough people to meet the quota. Aggregating the various interview quotas yields a sample that represents the desired proportion of each subgroup.
Possible Sources of Bias •Respondents chosen because they were similar to the interviewer, easily found, willing to be interviewed, or middle-class.
Advantages of Quota Sampling •Speed of data collection •Lower costs •Convenience

Snowball sampling refers to a variety of procedures that use probability methods for an initial selection of respondents and then obtain additional respondents through information provided by the initial respondents. This technique is best used to locate members of rare populations by referrals.

All probability sampling techniques are based on chance selection procedures. Because the probability sampling process includes an element of true randomness, the bias inherent in nonprobability sampling procedures is eliminated. Note that the term random refers to the procedure for selecting the sample; it does not describe the data in the sample.
Randomness characterizes a procedure whose outcome cannot be predicted because it depends on chance. Randomness should not be thought of as unplanned or unscientific; it is the basis of all probability sampling techniques. This section examines the various probability sampling methods.

The sampling procedure that ensures each element in the population has an equal chance of being included in the sample is called simple random sampling. The simplest examples include drawing names from a hat and selecting the winning raffle ticket from a large drum. If the names or raffle tickets are thoroughly stirred and on the same size pieces of paper, each person or ticket should have an equal chance of being selected. In contrast to other, more complex types of probability sampling, this process is simple because it requires only one stage of sample selection.

Suppose a researcher wants to take a sample of 1,000 from a list of 200,000 names. With systematic sampling, every 200th name on the list would be drawn. The procedure is extremely simple: a starting point is selected by a random process, and then every nth name on the list is selected.

In stratified sampling, a subsample is drawn using simple random sampling within each stratum. If the number of sampling units drawn from each stratum is in proportion to the relative population size of the stratum, the sample is a proportional stratified sample. In a disproportional stratified sample the sample size for each stratum is not allocated in proportion to the population size but is dictated by analytical considerations, such as variability in store sales volume.
For example, although the percentage of warehouse club stores is small, the average dollar sales volume for the warehouse club store stratum is quite large and varies substantially from the average sales volume of the smaller independent stores. To avoid overrepresenting the chain stores and independent stores (with smaller sales volumes) in the sample, a disproportional sample could be taken.

The purpose of cluster sampling is to sample economically while retaining the characteristics of a probability sample. Cluster sampling is an economically efficient sampling technique in which the primary sampling unit is not the individual element in the population but a large cluster of elements; the clusters themselves are selected randomly.

A researcher who must decide on the most appropriate sample design for a specific project will identify a number of sampling criteria and evaluate the relative importance of each criterion before selecting a design. Selecting a representative sample is important to all researchers, but the degree of accuracy required, or the researcher’s tolerance for sampling and nonsampling error, may vary from project to project, especially when cost savings or another benefit may be traded off for a reduction in accuracy. The cost associated with the different sampling techniques varies tremendously; if the researcher’s financial and human resources are restricted, certain options will have to be eliminated. A researcher who needs to meet a deadline or complete a project quickly will be more likely to select a simple, less time-consuming sample design. Advance knowledge of population characteristics, such as the availability of lists of population members, is an important criterion. In many cases, however, no list of population elements will be available to the researcher.
This is especially true when the population element is defined by ownership of a particular product or brand, by experience in performing a specific job task, or on a qualitative dimension. A lack of adequate lists may automatically rule out systematic sampling, stratified sampling, or other sampling designs, or it may dictate that a preliminary study, such as a short telephone survey using random digit dialing, be conducted to generate information to build a sampling frame for the primary study. In many developing countries, resources such as reverse directories are rare, so researchers planning sample designs have to work around this limitation. Geographic proximity of population elements will also influence sample design: when population elements are unequally distributed geographically, a cluster sample may become much more attractive.
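The systematic and proportional stratified designs described earlier can be sketched as follows. The 1,000-from-200,000 systematic sample follows the text’s example; the store strata counts and the total stratified sample size are hypothetical.

```python
# Sketches of two probability sampling designs discussed above.
import random

random.seed(7)

# Systematic sampling: every 200th name after a random start,
# drawing 1,000 names from a hypothetical frame of 200,000.
frame = [f"name_{i}" for i in range(200_000)]
skip = len(frame) // 1_000        # skip interval n = 200
start = random.randrange(skip)    # random starting point
systematic = frame[start::skip]

# Proportional stratified sampling: each stratum's sample size
# matches its share of the population (counts are hypothetical).
strata = {"chain": 6_000, "independent": 3_500, "warehouse_club": 500}
n = 400                           # hypothetical total sample size
total = sum(strata.values())
allocation = {s: n * size // total for s, size in strata.items()}
# A disproportional design would instead oversample the high-variance
# warehouse-club stratum by analyst judgment rather than by share.

print(len(systematic), allocation)
```

Within each stratum, the allocated number of elements would then be drawn by simple random sampling, as the text describes.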
Internet surveys allow researchers to reach a large sample rapidly, which is both an advantage and a disadvantage. Sample size requirements can be met overnight, or in some cases almost instantaneously; a researcher can, for instance, release a survey one morning and have data back from around the globe by the next morning. If rapid response rates are expected, the sample for an Internet survey should be metered out across global regions, or across all time zones in a national study. In addition, people in some populations are more likely to go online during the weekend than on a weekday. If the researcher can anticipate a day-of-the-week effect, the survey should be kept open long enough that all sample units have the opportunity to participate in the research project.
