

forward thinking series

Join the Research: Participant Led Open Ended Questions

IS-2008-002



Join the Research: Participant Led Open Ended Questions

Paper submitted to the special issue of the International Journal of Market Research on Web 2.0 and Social Networks.

Summary / Abstract

Recent Internet developments make it possible to rely on the shared intelligence of groups for market research. We illustrate two applications in which users create content from their responses to open ended questions. Both the 'user created brainstorm' and the 'user coded open end' procedure prove useful for market research. We discuss the outcomes and show that the social and collaborative aspects of the applications positively influence user evaluations.



Introduction

The latest Internet developments have introduced new dynamics among consumers, as they allow participation, information sharing, social networking and user collaboration and creation (Huang and Behara 2007; Dearstyne 2007). Shirky (2008) has stated that new and social media allow people and groups to self-assemble and organize information without the need for much supervised effort. Due to their expressive capabilities and self-synchronization, informal networks are capable of doing things which formal systems cannot. Similarly, our society relies more than ever on 'We-think', the power of shared intelligence and peer judgments, to decide what is good (Leadbeater 2008). Due to the collaborative nature of digital technology, enhanced creativity results from how we think together as groups. In this line of thinking, and based on the concept of the wisdom of crowds (Surowiecki 2004), Kearon (2005 and 2008) applies this trend to market research, e.g. concept testing. Using the technique of predictive markets, he provides evidence that a diverse crowd outperforms a classic, more rigorous sampling approach; in other words, shared intelligence is useful for market research.

In this paper we apply this reasoning to open ended questions. Quantitative surveys often contain open ended questions to measure consumers' spontaneous reactions (e.g. likes and dislikes, top of mind ideas about a topic). Open questions have several experiential, quality and economical drawbacks, however. They are run in silos and individual contributions are isolated from others' input, thus not allowing peer-to-peer iteration and inspiration. Considering the trend of 'self-assembly' and 'we-think', this approach is doomed not to benefit from group creativity. The completion of a traditional open end is always a snapshot: a reflection of the ideas that cross participants' minds at the very moment a measurement question is presented to them. That instant answer may be incomplete. If participants were exposed to the answers of others, they might agree with additional ideas not initially thought of and thus generate additional insights. Next, considerable portions of respondents leave open ended boxes blank, indicating they have 'no idea' or nothing to add. The question remains to what extent participants really have no idea, are just unable to come up with an answer, or are unwilling to put any effort into the task. Such situations are opportunity costs and a loss of data.

Analyses of open questions are also subject to biases (e.g. coder bias due to skills, mood, fatigue, knowledge, order effects). Ideally the analysis procedure follows established qualitative analysis techniques (e.g. predefined codes, multiple coders) (Miles and Huberman, 1994). Still, analysts always have to make subjective assumptions as to what respondents really wanted to say, and timing or budgetary constraints do not always allow an iterative, well-grounded process. Considering innovative Internet behaviors (e.g. user generated content, sharing, tagging and digging), many of these drawbacks can be resolved by offering opportunities for user led open ended questions.

The rest of this paper is organized as follows. First, we introduce a framework for user led open ended questions. Next, we describe two applications: a user created brainstorm and a user generated coding procedure. For each application, results and conclusions are reviewed.

User-led open questions: ceding control to participants

We have developed two similar instruments which allow users to create content from their own qualitative input such that it becomes quantitative and structured information. In user led open questions, participants go through three phases which are fully in line with the current context of social collaboration (Jaffe 2007):

• Creation: researchers initiate the process by asking participants' opinions, which they voice and extend upon in multiple open text boxes.
• Contextualization: participants add meaning, relevance, specificity and interpretation to their initial content via tagging and categorizing.
• Propagation: the categorized and labelled content is shared with others, and participants 'dig' into their answers. This means they add importance to their own input and that of peers.
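To make the three phases concrete, here is a minimal sketch of the data flow in Python. It is purely illustrative: the class names, fields and matching rule are our own assumptions, not the implementation described later in this paper.

```python
# Minimal sketch of the creation -> contextualization -> propagation flow.
# All names (Answer, Category, dig counts) are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Category:
    label: str        # user generated code / core idea
    digs: int = 0     # how many peers selected ("dug") it

@dataclass
class Answer:
    respondent_id: str
    text: str                                        # creation: raw open-ended input
    categories: list = field(default_factory=list)   # contextualization: tags chosen or added

def contextualize(answer, core_idea, shared_categories):
    """Assign a core idea to an existing shared category, or create a new one."""
    for cat in shared_categories:
        if cat.label.lower() == core_idea.lower():
            answer.categories.append(cat)
            return cat
    new_cat = Category(label=core_idea)
    shared_categories.append(new_cat)
    answer.categories.append(new_cat)
    return new_cat

def propagate(participant_choices, shared_categories):
    """Propagation: a participant checks the shared categories (s)he also agrees with."""
    for cat in shared_categories:
        if cat.label in participant_choices:
            cat.digs += 1

# Example run with two hypothetical respondents
shared = []
a1 = Answer("r1", "I would like internet and email on my TV")
contextualize(a1, "internet via TV", shared)
a2 = Answer("r2", "Email on television would be handy")
contextualize(a2, "internet via TV", shared)   # matches the existing category
propagate({"internet via TV"}, shared)
print([(c.label, c.digs) for c in shared])
```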

The first application is meant for participants to brainstorm and think interactively about a wide variety of topics (User Created Brainstorm). In traditional surveys people are prompted to provide their top of mind ideas but are never stimulated to think their answers through and contrast them against the input of others. The User Created Brainstorm collects a diversity of new ideas from different people with varied backgrounds and takes them through a social iteration process. In a first step, participants are invited to list as many of their own ideas as possible about a certain topic (creation). The elicited content is then contextualized by asking each contributor to review whether his or her own ideas are already in the list provided by other participants. If (s)he has brought up something new, the new idea can be added to the overall listing. Finally, participants are invited to indicate or 'dig' all the ideas they like (propagation).

The second instrument allows participants to code their own open responses into structured categories which they themselves define (User Coded Open Ends). The User Coded Open Ends also consists of three steps. Participants first provide their opinion about a concept, idea or stimulus. In other words, they create their feedback just as they would do in a conventional open question. For the contextualization, participants are invited to interpret and tag their own answers by indicating the core ideas in their response. These core ideas or user generated codes can then either be assigned to existing core ideas created by peers or added to the list if the core idea forms a new category. Finally, participants reassess the original question by checking the user generated codes of others they (also) agree with (propagation). In other words, participants are presented with a list of core ideas as if it were an aided closed ended question. This last step is conducted by all participants, including those who initially had no idea, and hence should lead to additional inspiration and data.

We will now illustrate both techniques in two studies, describe the method and data collection, discuss the results and evaluate the methods.

Study 1: User-created brainstorm – case 'the television of the future'

In a first application participants were involved in an online brainstorm session about the 'television of the future'. Next to testing the outcome and user experience of the User Created Brainstorm, our goal was to assess to what extent good ideas were generated by the crowd. Background questions determined the expertise and creativity levels of respondents. If the wisdom of crowds theory holds here, the best ideas should come from a variety of people, not only the experts or the creative (Surowiecki 2004). In other words, the number of good ideas generated would not be correlated with expertise or creativity.

Data collection procedure and measurement

Data were collected via the opt-in internet access panel of XL Online Panels (www.xlonlinepanels.com). A sample representative of the Flemish population in terms of age and gender was drawn to ensure heterogeneity. In total, 121 people participated (48% female – 52% male; 20% younger than 35, 50% between 35 and 54, and 30% older than 55). Respondents completed an online survey containing regular survey questions and the brainstorm application. A first part related to the usage of and attitudes towards television, and measured frequency of watching television, involvement with the current TV medium, as well as interest in future developments. Respondents also graded themselves on a series of items measuring creativity. The distributions and psychometric characteristics of these measures are shown in Tables 1 and 2.


5

Table 1 – Descriptive sample on affinity with television

Frequency of watching television
  Every day                  65%
  5 to 6 days a week         22%
  3 to 4 days a week          7%
  1 to 2 days a week          4%
  Less than 1 day a week      1%

Table 2 – Descriptive statistics and psychometric properties of TV involvement and creativity²

Item (grouped by principal component)                            Bottom 2¹   Top 2¹   Factor loading²

Involvement current T.V. (Cronbach's alpha = 0,61)
  I am aware of the current programs on T.V.                        20%        60%        0,69
  I find it important to have a recent model of television          50%        21%        0,56
  I could not live without my television                            37%        36%        0,88

Interest future T.V. (Cronbach's alpha = 0,88)
  I am interested in digital television                             21%        59%        0,82
  I am fascinated about how television will change the future       21%        59%        0,87
  I am interested in Internet television                            22%        44%        0,98

Creativity (Cronbach's alpha = 0,84)
  People often come to me when they need new ideas                  26%        26%        0,73
  I easily find several solutions to one and the same problem       11%        58%        0,84
  I am good at brainstorming                                        11%        54%        0,87
  I am generally considered as a creative person                    11%        57%        0,82

¹ 5-point Likert scale: Completely Disagree to Completely Agree.
² Factor loadings based on Promax rotation. KMO test = 0,80; total variance explained = 69%.
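For readers who want to reproduce reliability figures like those in Table 2, the sketch below computes Cronbach's alpha from a respondents-by-items matrix of 1–5 scores. It is a hedged illustration on fabricated data; the principal components analysis with Promax rotation reported above would be run separately in a statistical package and is not reproduced here.

```python
# Hedged sketch: Cronbach's alpha for a block of Likert items
# (rows = respondents, columns = items scored 1-5). Fabricated data only.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
# Fake 121 x 3 matrix standing in for the three 'involvement' items; because the
# fake items are uncorrelated, the resulting alpha will be low, unlike Table 2.
fake_block = rng.integers(1, 6, size=(121, 3))
print(round(cronbach_alpha(fake_block), 2))
```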


Next, participants completed the three-step brainstorm (Figure 1).

• Participants were asked to provide their ideas about new TV content, as well as product features, without taking technological possibilities or cost implications into account. They were allowed to give five ideas. Participants who did not have any ideas could check the 'no idea' box.
• The second screen displayed the participant's own ideas on the left, while on the right the complete list of ideas generated by peers who had already participated was presented. Participants reviewed their ideas in the context of the existing list. If an idea was already present in the study list, no action was needed. If participants felt they had come up with a new idea, they could add it to the list. The elicited answers are available to all subsequent participants in the study, including typos and irrelevant or inappropriate answers. In order to cope with this, a mechanism for 'social correction' was built in: via an 'editing and flagging system' participants had the opportunity to flag each other's answers, indicate what was wrong, and send this feedback to the researcher.
• In the last screen, all participants (including those who originally could not think of any ideas) were presented with the final list of ideas. The order in which the ideas were presented was randomized to exclude order effects. Participants screened and checked those ideas that appealed to them most.

The end of the survey included evaluation questions about the user-created brainstorm (e.g. satisfaction with the platform) as well as an evaluation of the '2.0' elements (e.g. the social editing system, peer input).

Figure 1 – ‘User-created brainstorm’ procedure



Software development & usability testing

The User Created Brainstorm application was developed internally at InSites Consulting (www.insites-consulting.com). Programming was done in ASP.Net, with MS-SQL databases for data storage. The MS-SQL database consisted of multiple tables storing user labels, respondent information, the number of answers shown, the number of added categories, the number of categories selected, and so on. The regular parts of the survey before and after the brainstorm session were programmed using web survey software. Via a number of unique identification keys the data from the survey software and the User Created Brainstorm were linked. The user experience of the regular survey and the brainstorm was nevertheless seamless, as participants could not notice the switches between platforms. Figure 2 illustrates the platform by means of screenshots.
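The linking of the two data sources can be pictured as a simple join on the identification key. The sketch below is a hypothetical Python illustration only; the actual platform stored its data in MS-SQL tables, and the field names used here are assumptions.

```python
# Hypothetical illustration of linking regular survey records with brainstorm
# records through a shared identification key (actual storage was MS-SQL).
survey_rows = [
    {"resp_key": "A17", "age": 42, "tv_involvement": 3.7},
    {"resp_key": "B03", "age": 29, "tv_involvement": 4.2},
]
brainstorm_rows = [
    {"resp_key": "A17", "ideas_added": 2, "ideas_dug": 11},
    {"resp_key": "B03", "ideas_added": 0, "ideas_dug": 7},
]

brainstorm_by_key = {row["resp_key"]: row for row in brainstorm_rows}
merged = [
    {**srv, **brainstorm_by_key[srv["resp_key"]]}
    for srv in survey_rows
    if srv["resp_key"] in brainstorm_by_key
]
print(merged)
```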

The technical performance and usability of the platform were investigated in multiple rounds. To ensure all participants understood the tasks well, nine usability sessions with four participants each were conducted, involving people with relatively low internet skills, elderly people and lower educated people. For the final test reported here, all respondents were asked to rate the usability of the platform on several aspects; 94% of the participants indicated that they understood the task immediately (87% top 2 boxes and 7% neutral). We performed a content analysis on the answers of the remaining 7 participants. Only one participant reported technical problems and was removed from further analyses. The other participants went through the procedure correctly despite their low usability evaluation.

Figure 2 – ‘User created brainstorm’ screenshots




Data analysis and findings

Brainstorm results

In total 106 different ideas were generated by 95 individuals, while 25 participants indicated that they had 'no idea'. All respondents were invited to evaluate the final idea list. Only one person who initially had 'no idea' still did not check ('dig') an answer in the aided list of ideas at the end. In order to select the best ideas, sampling accuracy needs to be taken into account. New ideas generated by the first respondents were available in the final idea list to the majority of the sample. Hence, these ideas had a higher chance of being propagated than new ideas that were generated by the last participants. In order to correct for this bias, we adjusted the percentage of participants that dug an idea in the propagation phase for the standard error and sample size¹. The results are shown in Figure 3.

Figure 3 – 'User created brainstorm' results (with detail panel)

¹ For every idea, the error margin was calculated given the sample size and at a 95% confidence level. This error margin was subtracted from the percentage of participants that dug the idea in the last propagation phase, resulting in a corrected propagation score.
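Put differently, the corrected propagation score is the observed 'dig' share minus the half-width of its 95% confidence interval. A minimal sketch, assuming the usual normal approximation for a proportion (the numbers in the example are invented):

```python
# Corrected propagation score as described in footnote 1: observed dig share
# minus the 95% error margin, assuming a normal approximation for a proportion.
import math

def corrected_propagation(digs: int, n_exposed: int, z: float = 1.96) -> float:
    p = digs / n_exposed
    error_margin = z * math.sqrt(p * (1 - p) / n_exposed)
    return p - error_margin

# An idea dug by 30 of the 60 participants who saw it scores lower than one
# dug by 55 of 110, because the smaller sample carries a wider error margin.
print(round(corrected_propagation(30, 60), 3))
print(round(corrected_propagation(55, 110), 3))
```

With this correction, an idea needs either a high dig share or a large exposed sample to keep a high score, which is exactly the bias adjustment described above.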


Based on the results, we can distinguish four types of ideas.

• Winners are ideas created early in the brainstorm and shown to a large group of people. They were also 'dug' by a large crowd, meaning they have high recognition and agreement as being a good idea. Hence, we are confident that these are popular ideas. Examples in this case are using the TV as a computer, removing advertising, video communication, internet and email via TV, wireless TV, subtitles for all programs, and newspaper and magazine distribution via TV.
• Low potential or niche ideas are concepts generated in the beginning but selected by only a few subsequent participants. Examples are more theatre, chat and 3D-TV.
• Challengers are ideas shared among only a small group of participants. Within this smaller group, however, the ideas were well received. They are thoughts that have high potential but are still somewhat risky because they were not presented to a large sample. One might consider additional research if some suggestions are appealing to managers. Examples are free television, school television and attending concerts.
• High-risk ideas have been produced at a late stage of the study and have not been picked up by the small group of participants that was subsequently exposed. Therefore they should serve only as a source of inspiration. Here, too, managers may consider additional in-depth research.
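These four types follow from two dimensions per idea: how many participants were exposed to it and how strongly it was propagated. A minimal sketch of that classification; the threshold values and the last idea in the example list are purely illustrative assumptions.

```python
# Illustrative classification of brainstorm ideas into the four types above,
# based on exposure (sample size when the idea was launched) and corrected
# propagation. The thresholds are assumptions for the sake of the example.
def classify_idea(n_exposed: int, corrected_propagation: float,
                  exposure_cut: int = 60, propagation_cut: float = 0.20) -> str:
    if n_exposed >= exposure_cut:
        return "winner" if corrected_propagation >= propagation_cut else "low potential / niche"
    return "challenger" if corrected_propagation >= propagation_cut else "high risk"

ideas = {
    "internet and email via TV": (110, 0.45),
    "3D-TV": (105, 0.08),
    "school television": (25, 0.30),
    "holographic projection": (12, 0.05),   # hypothetical late, little-supported idea
}
for label, (n, score) in ideas.items():
    print(f"{label}: {classify_idea(n, score)}")
```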

Social elements as drivers of participant satisfaction

Overall the survey experience was well received, with an average satisfaction score of 7.5 on a 10-point scale. Based on a one-sample t-test against internal survey satisfaction benchmark data, the user created brainstorm significantly outperforms general surveys². Relying on the collective wisdom and shared intelligence of participants thus seems to be an added value from a participant perspective as well (Table 3). The sharing and digging of ideas was appreciated by the majority of participants, and participants agree that the social aspect of the brainstorm allows them to give their opinion more adequately. While both of these elements drive participants' survey satisfaction, the social aspect is the most important one for enhancing an enjoyable survey experience.

Table 3 - Social elements as drivers for satisfaction


² Internal benchmark data are based on more than 1 million observations of survey satisfaction spread over a 1-year period in 2007–2008. Two-tailed significance test statistics: t-value = 3,18; df = 163; p = 0,002.
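The benchmark comparison in footnote 2 is a standard one-sample t-test of the observed satisfaction scores against the benchmark mean. The sketch below illustrates that kind of test on fabricated data; the use of scipy and the numbers shown are our assumptions, not the original analysis.

```python
# One-sample t-test of survey satisfaction against a fixed benchmark mean,
# illustrating the comparison in footnote 2 (fabricated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
satisfaction = rng.normal(loc=7.5, scale=1.6, size=164)  # fake 10-point scale scores
benchmark_mean = 7.1                                     # hypothetical benchmark value

t_stat, p_value = stats.ttest_1samp(satisfaction, popmean=benchmark_mean)
print(f"t = {t_stat:.2f}, df = {len(satisfaction) - 1}, p = {p_value:.3f}")
```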


Although a large group of participants saw the social control symbol (62%) and 81% of the participants indicated they would use the social feedback mechanism if needed, we did not receive any requests for adaptation in our final test. About 72% of the contributors mentioned they were happy to help the researchers by reporting mistakes. All of this evidence supports the conclusion that our tool works as intended.

Wisdom from the crowds

The corrected propagation of an idea was taken as a measure of the quality of an idea. A series of bivariate Pearson correlation analyses was conducted between the corrected propagation and the expertise, involvement and creativity of the contributor. No significant correlations were found with the degree of affinity with television, nor with self-reported creativity (Table 4). We also performed difference tests to compare the quality of ideas generated by individuals scoring low, medium or high on affinity with television. We did not observe any significant differences. A similar analysis was conducted for the creativity index: again, the success rate of ideas generated by participants with high creativity did not differ from that of ideas produced by less creative individuals. The lack of correlations and significant differences supports the 'wisdom of crowds' hypothesis, as good ideas were created by all types of contributors. In other words, it does make sense for companies to involve 'lay people' in online brainstorms because they, too, may come up with winning ideas.
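As an illustration of these checks, the sketch below correlates a creativity score with the corrected propagation of each contributor's ideas and adds a low/medium/high group comparison (a one-way ANOVA stands in for the unspecified difference test). All data and variable names are hypothetical.

```python
# Hypothetical sketch of the 'wisdom of crowds' checks: correlate idea quality
# (corrected propagation) with contributor creativity, then compare groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
creativity = rng.normal(3.5, 0.8, size=95)        # fake self-rated creativity index
idea_quality = rng.normal(0.25, 0.10, size=95)    # fake corrected propagation scores

r, p = stats.pearsonr(creativity, idea_quality)
print(f"Pearson r = {r:.2f} (p = {p:.2f})")

# Low / medium / high creativity groups compared with a one-way ANOVA.
groups = np.digitize(creativity, np.quantile(creativity, [1 / 3, 2 / 3]))
f_stat, p_anova = stats.f_oneway(*(idea_quality[groups == g] for g in (0, 1, 2)))
print(f"F = {f_stat:.2f} (p = {p_anova:.2f})")
```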

Table 4 – The crowd generates ideas

Pearson correlation coefficients with the quality of ideas (corrected propagation)

Involvement current T.V.                                             0,03
  I am aware of the current programs on T.V.                        -0,02
  I find it important to have a recent model of television           0,03
  I could not live without my television                             0,04
Interest future T.V.                                                 0,12
  I am interested in digital television                              0,07
  I am fascinated about how television will change the future        0,11
  I am interested in Internet television                             0,15
Creativity                                                          -0,06
  People often come to me when they need new ideas                   0,06
  I easily find several solutions to one and the same problem       -0,15
  I am good at brainstorming                                        -0,07
  I am generally considered as a creative person                    -0,03

(None of the correlations is statistically significant.)

Study 2: Giving participants the power to analyze their own open answers

Background and objectives

This study was conducted in cooperation with Sara Lee/Douwe Egberts. A new coffee concept (cold coffee in a can – see Figure 4) was presented to a sample of participants. Next to traditional concept test questions, respondents were asked to give their likes and dislikes about the product concept in an open ended question. Our goal was to investigate to what extent participants are able to code their own open answers. We therefore compared respondent coding with the results of traditional manual coding by researchers and of coding via a text analysis software package for open ended questions. Next, we investigated the social dynamics of exposing participants to the opinions of other users. We analyzed to what extent participants also dug answers of other participants in the propagation phase. Similar to the analysis of the User Created Brainstorm, the user experience and survey satisfaction of the user led coding system were assessed.


Figure 4 – Concept board: 'Coffee in a can'

Data collection procedure and measurement

Data were again collected via the opt-in Internet access panel of XL Online Panels. A representative sample of the Flemish population was invited to participate. In total 203 persons participated (47% female – 53% male; 22% younger than 35, 44% between 35 and 54, and 34% older than 55). The survey started with classical concept test questions such as appeal, usefulness and (un)priced buying intention. Participants were subsequently redirected to the three-step user led coding scheme. Figure 5 illustrates the procedure.

• Creation: participants described their likes and dislikes about the concept in a classic open ended question. Respondents had the opportunity to indicate that they had no positive or negative remarks about the concept.
• Contextualization: in a second step, the answer of the participant was shown again and each participant had to reformulate their answer into core ideas as short verbatims. Every core idea was then displayed, together with the list of existing categories generated by other participants. If participants found that their core idea belonged to one of the existing categories, it was assigned to that category. As in traditional coding, this step allowed an answer to belong to several categories at once. If the answer did not fit into one of the existing categories, participants could create a new answer category. This implies that participants code their own input while the original input on the open ended question is still kept.
• Propagation: the final coded list of categories was presented to participants. They had the ability to select other coded variables that they had not originally thought of, but with which they also agreed. This last step was also presented to participants who did not contribute any idea at first.

The survey ended with the same evaluation questions as those used for the user-created brainstorm.


Figure 5 – Procedure 'User Coded Open Ended Questions'

Software and usability test

This application was also developed internally at InSites Consulting, using the same programming and relational database software (see Figure 6 for illustrative screenshots). Since a good understanding of the task is crucial for the reliability of the results, we performed a series of usability sessions (nine mini groups with four participants per session) on the platform with a similar group of Internet users. At the end of the survey we asked participants to report their understanding of the user led application. The participants indicated that they understood the task immediately (86%) and that it was clear that the answer categories were created by other participants (82%). A total of 68 (dislikes) and 69 (likes) of the 203 participants claimed not to have any negative or positive points. Among the other persons who completed the contextualization phase, 22% (dislikes) and 23% (likes) added at least one core idea as a new category. For the dislikes and likes respectively, 83% and 81% indicated that at least one of their core ideas belonged to an existing category. These numbers are an indication that participants understood the task.

Figure 6 – Screenshots of the 'User-coded open-ended questions'


Data analysis and findings

User-coded open end results

In order to assess the quality of the user generated coding, we compared the results of the User Coded Open Ends with the results of traditional manual coding and of coding via specialized software for text mining open ended questions (SPSS Text Analysis for Surveys). The results are displayed in Figures 7, 8 and 9. First of all, we observe that the number of core ideas extracted in the user led application is larger than in the other approaches (likes: 253, 141 and 154; dislikes: 239, 181 and 221; for the User Coded Open End, text mining and manual coding respectively). It seems participants extract more information out of their own initial answer than researchers or text mining modules do. Secondly, when comparing the number of categories resulting from each categorization we note some differences. User based coding of dislikes leads to somewhat more categories than the other two methods (18 for text mining, 20 for traditional coding and 31 for user based coding). The number of categories for likes with user coding lies in between the other two methods (18 for text mining, 23 for traditional coding and 19 for user coding). This may indicate that participants decided (especially for the dislikes) to split categories into more subcategories. The top 3 product likes are the same across all methods: 'the refreshing coffee taste' in a 'nice' and 'convenient' packaging. Similarly, the two main dislikes are the same across methods, namely 'cold coffee' and 'the idea of drinking coffee out of a can'. There are several categories that were extracted by participants and not by text mining and/or human coders (again especially for dislikes), while the reverse occurs less frequently. From a qualitative perspective it even appears that the codes that were not extracted by respondents but did appear in the other methods may simply reflect different interpretations (i.e. omitted or more general). 'Unhealthy' connotations, for example, do not appear under this general label with users, while they do raise issues like 'garbage', 'too sweet', 'makes me nervous' or 'gain weight'. In other words, the '2.0' user generated coding seems more complete and granular, in that users may extract a more detailed set of categories. Another difference is that the user led coding extracts more 'emotional' answers, e.g. perceiving the product as 'tasty' or voicing a dislike about 'yet another new (marketing) product'. Since this information is not really actionable, it is often left out of the analysis in traditional coding or even text mining. All in all, these data indicate that our user generated coding method clearly stood the test against alternative methods and even provided somewhat more detail.

Figure 7 – Tag clouds of 'User-coded open ends'


Figure 8 – Coded dislikes

Figure 9 – Coded likes


The collective coding procedure also gives participants the opportunity to dig answers from other respondents in the last stage. The analysis revealed that only 13 of the 68 and 17 of the 69 participants who initially gave no answer could still not think of any issues even with a list of aided categories. This implies an inspiration or propagation effect of over 80%. The counts in the propagation stage were again corrected for sampling accuracy for further analyses³. The combination of the results from the spontaneous reactions and the inspiration leads to additional insights for marketers and researchers. In the case of 'likes' it gives insight into the unique selling points (USPs), which can be used in, for example, communication or packaging. For dislikes, it reveals the action points for improvement. We can distinguish four quadrants of opinions. The results are depicted in Figures 10 and 11, where the spontaneous outtakes (unaided responses from the creation phase) are mapped against corrected propagation. The font size of the codes refers to the sample size at the moment the statement was launched.

³ Aided percentage in the propagation phase corrected for sample size and error margin.

Figure 10 – User-coded 'like' quadrants


• Natural USPs/rejections: the main likes/dislikes, found in the upper right corner. They are spontaneously and immediately mentioned and selected by a large group of participants. These inputs are valuable for the main part of the population, or they are large barriers and reasons not to buy. Example: 'coffee is cold' as a main dislike.
• Recognition USPs/rejections: these ideas are spontaneously mentioned by a smaller group but 'dug' by a reasonable crowd and are therefore worth taking into account. These are ideas that consumers do not always spontaneously think of but often agree with when exposed to them (e.g. when other consumers mention them). In the case of benefits, these ideas will also be useful in communication aimed at recognition. In the case of dislikes, these are often negative points that are not apparent at first and only a real barrier to a few, but they can easily expand to a larger group, e.g. after product trial, word of mouth, press and other buzz. Examples are 'prefer regular coffee' and 'ideal during summer'.
• Targeted USPs/specific barriers: this cluster of ideas was spontaneously brought up by a relatively small group of participants and not very frequently selected by others. These are characteristics that appeal to a niche. In the case of a dislike it can be a barrier for a limited group of respondents. Examples for this coffee concept are 'relaxing' and 'café frappé – Greece'.
• A final group of ideas has few spontaneous counts and almost no recognition. These ideas are of little or even no importance.

Figure 11 – User-coded 'dislike' quadrants


Social elements as drivers for satisfaction

The average survey satisfaction of the User Coded Open End was 6.9 on a 10-point scale. While this is lower than the evaluation of the User Created Brainstorm and not significantly different from our benchmark data, the majority of participants was again positive about the sharing and 'digging' part; they trust the content from others and do not consider the user generated coding too much effort (see Table 5). Again, being exposed to the answers of others, as well as the feeling of being able to better express their opinion via user coding, drives participants' survey satisfaction. More than in the brainstorm application, the intrinsic motivation of participants drives survey satisfaction. Still, researchers should be aware of participants' motivation and not blindly apply such procedures to entire samples and for every study: participants who feel it is a lot more work tend to be significantly less satisfied in the end. The social control system was again not used by any of the contributors in our final test, although participants indicated they would use it if necessary.

Table 5 – Social elements as drivers for satisfaction

OLS regression analysis² of survey satisfaction; R² = 0,37

Statement¹                                                                            Bottom 2   Top 2   Standardized coefficient   t-value   p-value
I have the feeling I have really been able to provide my opinion                         9%       61%            0,33                 4,50      0,00
It is fun/interesting to see the list of answers of other survey participants            7%       60%            0,22                 2,84      0,01
I dislike the fact that my responses can be shown to other participants                 70%        3%           -0,07                -0,97      0,33
I find it a lot of work to fill in questions in this manner                             50%       17%           -0,21                -2,94      0,00
I am influenced by the answers of other participants                                    70%        6%            0,06                 0,91      0,36
I believe that all answers are truly coming from other participants to the survey       11%       41%            0,01                 0,19      0,85

¹ 5-point Likert scale: Completely Disagree to Completely Agree.
² Constant term included = 3,2 with p = 0,00. No multi-collinearity problems revealed based on VIF test.
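The drivers in Table 5 come from an ordinary least squares regression of survey satisfaction on agreement with the social-element statements. The sketch below fits the same kind of model on fabricated data with statsmodels; the library choice, variable names and coefficients are our assumptions, and the paper's standardized coefficients would additionally require z-scoring the variables.

```python
# Hedged sketch of an OLS driver analysis like Table 5: regress overall survey
# satisfaction on agreement with social-element statements (fabricated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(3)
n = 203
df = pd.DataFrame({
    "could_express_opinion": rng.integers(1, 6, n),   # fake 5-point Likert items
    "fun_to_see_others": rng.integers(1, 6, n),
    "too_much_work": rng.integers(1, 6, n),
})
df["satisfaction"] = (
    5 + 0.6 * df["could_express_opinion"] + 0.4 * df["fun_to_see_others"]
    - 0.5 * df["too_much_work"] + rng.normal(0, 1.2, n)
)

X = sm.add_constant(df[["could_express_opinion", "fun_to_see_others", "too_much_work"]])
model = sm.OLS(df["satisfaction"], X).fit()
print(model.summary())          # coefficients, t-values, p-values, R-squared

# Multi-collinearity check in the spirit of footnote 2 (skip the constant column).
vifs = [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])]
print("VIFs:", [round(v, 2) for v in vifs])
```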



Conclusions

We have shown that user led open ended questions work well with respondents and generate useful insights for market research, with the advantage that almost no human intervention is needed. Our analysis of the User Created Brainstorm illustrated that general samples are able to create as many ideas as expert or creative samples, supporting the wisdom of crowds. The User Coded Open End showed robust results against manual human coding and coding with text mining, with the advantage that input is coded the way participants intended it. For brainstorming as well as for self-coding, the peer-to-peer character seriously reduces the non-response on open questions and even enriches the data. Participants have a better survey experience due to the social collaboration (they like to see the answers of other participants) as well as the self-expressive capabilities (they feel they were able to really express their opinion) of user led open questions. This finding is important as it confirms our tools are in line with the philosophy of current social media, where participants join researchers and companies in providing new ideas and in analyzing and interpreting the results.

In the brainstorm procedure the social aspect of being stimulated by others' answers proved to be the most important factor for the overall evaluation, while for the open ended coding the driver was much more intrinsic. Compared to average surveys, the user brainstorm method even proved to generate higher satisfaction rates. For the user coded open ends, on the other hand, satisfaction ratings were not different from average surveys. One of the reasons for this finding is clearly that the method is elaborate and may become too demanding for participants. Obviously the procedures may lengthen survey completion time because extra questions are asked. Hence, there is a need for procedures and processes to handle these issues before the approach can be fully implemented in practice. Suggestions for improving user led coding are to apply these techniques to subsamples of participants, to combine them with traditional coding procedures, and not to blindly integrate them into each and every survey. The implications of this demand future research, however.

There are other limitations which need further research. Not all participants like social collaboration; their issues with the new applications need to be investigated further to improve the respondent experience. We have only provided analyses on an aggregate level, while a more in-depth analysis is needed to assess the differences between manual coding, text mining and user generated coding at a verbatim level. Next, user generated content creation is language dependent, which may make merging data from multiple countries cumbersome. All things considered, we encourage researchers and practitioners to apply the content creation and social collaboration frameworks. After all, participants put a lot of effort into voicing their opinion. It is a sign of respect that researchers use this information to its maximum capacity and with the correct interpretation, just as participants meant it.

Acknowledgements

We would like to thank Arnold Tromp of Sara Lee/Douwe Egberts for his support of this study and for providing stimulus materials, as well as Jeroen van Godsenhoven and Wim Van Driessche at SPSS Belgium for providing a trial licence for SPSS Text Analysis for Surveys (STAFS).

About the authors

Annelies Verhaeghe, R&D consultant, InSites Consulting
Prof. Dr. Niels Schillewaert, Associate Professor in Marketing, Vlerick Leuven Gent Management School, and Managing Partner, InSites Consulting
Tom De Ruyck, R&D consultant, InSites Consulting

InSites Consulting R&D White Paper series

Through its R&D department, InSites Consulting regularly publishes white papers on various methodological and/or marketing content issues, aiming to provide you with relevant and up-to-date marketing (research) insights that are based on scientifically grounded methods. Our white papers result from research data collected by InSites Consulting itself, from cooperation with third parties (e.g. universities or business schools), or from cooperation with InSites Consulting customers. While each white paper has a scientific flair, it essentially offers applicable insights on specific marketing research subjects, in a crisp format and layout. For additional questions, suggestions or further reading, please do not hesitate to visit us at www.insites.eu or contact us at info@insites.eu or +32 9 269 15 00.



References

Abiven, F. & Labidoire, E. (2007) Second Life. A Tool to Collaborate with the Consumer. Esomar Congress Excellence 2007. Berlin, pp. 585–596.
Anderson, C. (2004) The Long Tail: Why the Future of Business is Selling Less of More. Wired, October 2004. New York: Hyperion.
Comley, P. (2006) The games we play. A psychoanalysis of the relationship between panel owners and panel participants. Esomar world research conference Panel Research 2006. Barcelona, pp. 27–29.
Dearstyne, B. (2007) Blogs, Mashups, & Wikis? Oh, My! The Information Management Journal, 41, 4, pp. 24–33.
du Perron, B. & Kischkat, A. (2007) Digital Consumer Connections. Esomar world research conference Qualitative 2007, pp. 200–212.
Gadeib, A. & Genter, C. (2007) Joining the 4th Dimension. Esomar world research conference Qualitative 2007, pp. 187–199.
Giles, J. (2005) Internet encyclopedias go head to head. Nature, 438, pp. 900–901.
Hamilton, J., Eyre, L., Tramp, M., Galarneau, L. & Vriens, M. (2007) Why do some online communities work. Revealing the secrets with social and cognitive psychology. Esomar world research conference Qualitative 2007. Paris, pp. 12–14.
Huang, D. & Behara, R. (2007) Outcome-Driven Experiential Learning with 'web'. Journal of Information Systems Education, 18, 3, pp. 329–336.
Jaffe, J. (2007) Join the Conversation. How to Engage Marketing-Weary Consumers with the Power of Community, Dialogue, and Partnership. Hoboken, New Jersey: John Wiley & Sons.
Kearon, J. (2005) A Fresh Approach to Concept Testing. How to Get More Research for Less Time and Money. Esomar Congress 2005 Making a Difference.
Kearon, J. (2008) Predictive Markets – Is the Crowd Consistently Wise? From http://www.brainjuicer.com/download/papers/Predictive%20Markets.pdf – last accessed 24 June 2008.
Laurent, F. & Beauvieux, A. (2007) Listening instead of asking. How blogs provide a new way to better understand market trends. Esomar world research conference Qualitative 2007. Paris, pp. 12–14.
Leadbeater, C. (2008) We-think: The Power of Mass Creativity. London: Profile Books.
Miles, M. & Huberman, A.M. (1994) Qualitative Data Analysis: An Expanded Sourcebook. CA: Sage.
O'Reilly, T. (2005) What Is 'Web 2.0': Design Patterns and Business Models for the Next Generation of Software. O'Reilly Net. From http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web20.html?page=1.
Poynter, R. & Lawrence, G. (2007) Insight 2.0. New Media, New Rules, New Insight. Esomar Congress Excellence 2007. Berlin, pp. 597–608.
Puri, A. (2007) The web of insights. The art and practice of webnography. International Journal of Market Research, 49, 3, pp. 387–408.
Reinhold, N. & Bhutaia, K.L. (2007) The Virtual Home Visit. Esomar world research conference Qualitative 2007, pp. 28–41.
Shirky, C. (2008) Here Comes Everybody: The Power of Organizing Without Organizations. New York: Penguin Press.
Surowiecki, J. (2004) The Wisdom of Crowds. Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. London: Little, Brown.
Tapscott, D. & Williams, A.D. (2007) The peer pioneers. In Wikinomics: How Mass Collaboration Changes Everything, Chapter 3, pp. 73–77. Trowbridge, Wiltshire: The Cromwell Press.

