Constructing nomological nets on the basis of process analyses to strengthen CSCL research
Karsten Stegmann
Department of Psychology, LMU Munich, Germany
Article received 26 May 2014 / revised 6 August 2014 / accepted 29 September 2014 / available online 23 December 2014
Abstract
Due to the nature of
collaborative learning, realising perfectly controlled
experiments often
requires an unreasonable amount of resources and sometimes it
is not possible
at all. Against this background, I propose to augment an experimental design that is as good as feasible with a nomological net of relations
between instructional
support (intervention), learning processes and learning
outcomes. Nomological
networks are known from construct validity. In construct
validity, the
relations between variables (e.g. group differences,
correlation matrices) are
used to provide evidence for the validity of a measure. By
adding multiple process
and outcome variables together with the corresponding
relations between
intervention, process and outcome, the validity of causal
relations found can
be strengthened. I suggest adopting quality criteria from good
research designs
to evaluate the nomological nets. The resulting net needs to
be (1) theory
grounded, (2) situational, (3) feasible, (4) redundant, and
(5) efficient. By
making these nomological nets explicit and by designing them
according to the
presented criteria, CSCL research becomes more potent: the
risk of inconclusive
results is reduced while results that form a consistent
nomological net can be
interpreted with greater confidence, even if the
experimental design has
some flaws. If this becomes standard in CSCL research, it can be expected to contribute substantially more to knowledge accumulation in this area of
research.
Keywords: Construct Validity;
Nomological Net; Research Design;
Computer-supported Collaborative Learning (CSCL)
1. The role of process analyses in CSCL research
Approaches to
computer-supported collaborative learning (CSCL) are mainly
based on three
assumptions. First, collaborative learning outperforms other methods (under particular circumstances, e.g. with specific support) when it comes to learning outcomes. Usually, specific collaborative activities
like
argumentation (e.g. Clark, D'Angelo, & Menekse, 2009),
transactive
co-construction (e.g. Molinari et al., 2013; Weinberger &
Fischer, 2006),
reciprocal teaching (Palincsar & Brown, 1984) and
collaborative concept
mapping (van Boxtel, van der Linden, Roelofs, & Erkens,
2002) are
considered to be positively related to individual cognitive
processes of
learning.
Second, computer
support enables both certain learning activities (e.g.
simulation-based inquiry
learning; de Jong & van Joolingen, 1998) and more direct
support for
certain activities (e.g. scaffolds as an inherent, but adaptive,
component of
the learning environment; cf. Koschmann, 1994). Technology makes it possible to experience natural systems and phenomena that would otherwise be invisible (e.g. the heart of an engine or magnetism at the atomic level; cf. Fischer, Lowe, & Schwan, 2008). Technology
also enables
us to facilitate learning processes by different means, e.g. by
making various
resources accessible (e.g. Osborne & Hennessy, 2003),
scaffolding specific
individual processes like the construction of single arguments
(Stegmann,
Wecker, Weinberger, & Fischer, 2012), or offering ways to
communicate and
collaborate (Wegerif, 2002).
Third, the combination
of collaborative learning and technology can have positive
interaction effects
that go beyond the simple combination of main effects. On the one hand, the quality of collaborative learning processes is raised through adaptive scaffolds that amplify the positive effects of collaborative learning. On the
other hand, the effects of technology functions (like access to
various
resources) on learning outcomes are boosted through
collaborative learning (cf.
Weinberger, Stegmann, & Fischer, 2010).
Set against this
background, CSCL research aims to provide knowledge about how
technology can
support collaborative learning processes (and thereby learning
outcomes on an
individual as well as a group level; cf. Stahl, 2006) most
effectively. On the
one hand, the problems that may arise through collaboration or
the use of
technology have to be minimised, while, on the other, the use of
technology
resources and collaborative learning processes has to be
optimised. The effect
of CSCL on learning outcomes is therefore mediated by processes
that occur
during the collaborative learning phase. This general model can
be described in
a triangle of hypotheses (cf. Wecker, Stegmann, & Fischer,
2012; Fig. 1):
(a) instructional/technological support facilitates learning
activities; (b)
facilitated learning activities have positive effects on
learning outcomes; and
(c) mediated by learning activities, instructional/technological
support has a
positive effect on learning outcomes.
Figure 1:
(see pdf file) General triangle of hypotheses in CSCL research.
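The logic of the triangle can be illustrated with a small simulation: if support raises learning activities (hypothesis a) and activities raise outcomes (hypothesis b), a mediated total effect of support on outcomes (hypothesis c) emerges. A minimal sketch, using simulated data with invented effect sizes (none of the numbers come from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated data: support raises activities, activities raise outcomes.
support = rng.integers(0, 2, n)                   # 0 = control, 1 = scaffolded
activity = 0.8 * support + rng.normal(0, 1, n)    # hypothesis (a)
outcome = 0.6 * activity + rng.normal(0, 1, n)    # hypothesis (b)

def slope(x, y):
    """OLS slope of y regressed on x (with intercept)."""
    design = np.column_stack([np.ones_like(x, dtype=float), x])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1]

a = slope(support, activity)   # (a) support -> activities
b = slope(activity, outcome)   # (b) activities -> outcomes
c = slope(support, outcome)    # (c) total effect, mediated by activities
print(round(a, 2), round(b, 2), round(c, 2))  # all three clearly positive
```

With all three slopes positive, the simulated net is consistent with the triangle; in a real study, each slope corresponds to one of the three hypotheses to be tested.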
To test this triangle
of hypotheses and to allow researchers to infer causal-effect
relations, three
conditions must be fulfilled (cf. Cook & Campbell, 1979):
(a) when the
causing variable varies, the affected variable must vary too
(covariation); (b)
the cause must occur before the effect occurs (temporal
precedence); and (c) no
plausible alternative explanations exist. While the first two
conditions can be
reached in CSCL research rather easily, the third condition is
very difficult
to reach. CSCL research often takes place in field-like settings
and even
studies with a rather high level of control (e.g., Weinberger,
Marttunen,
Laurinen, & Stegmann, 2013) are much less controlled than
classical
psychological experiments. The adherence to instructional
advice, for example,
is usually not enforced. Testing the effect of an intervention on learning activities is therefore, in experimental terms, a manipulation check; semantically, it tests whether the instruction, as realised, is able to induce the intended behaviour. Due to the nature of collaborative learning,
realising
perfectly controlled experiments requires an unreasonable amount
of resources
and sometimes it is not possible at all.
Just imagine a jigsaw
experiment (for a detailed description of the jigsaw method see
Aronson, 1978).
In a jigsaw script, the content to be learnt is split into, for
example, four
subtopics. Groups of four prepare one of the four topics and
finally four new
groups are formed with one learner from each of the previous
groups and
learners teach their subtopic to the other group members. In
this experiment,
individual learning, unscripted collaborative learning and
collaborative
learning using the jigsaw method are compared. In the individual
learning
condition, 32 subjects are enough if a large effect with 80%
power is expected.
In the condition with unscripted collaborative learning in groups of four, the optimal number of subjects might be, due to the nestedness of the data, 128 learners in 32 groups. In the jigsaw condition, 512 subjects are required because 16 subjects learn collaboratively together in each of 32 groups.
According to Maas and Hox (2005), more than 50 groups are needed for acceptable statistical multilevel analyses. With 64 groups across two
conditions, this
criterion is fulfilled. Finally, this simple one-factorial design with three conditions requires 672 subjects to detect large effects. And
still it would not be free of confounded factors, e.g. the type
of support is
confounded with group size. While a condition with unscripted learners in groups of 16 would be possible, a condition with the jigsaw script in groups of four is not. This example illustrates the inherent
problems of CSCL
research in excluding plausible alternative explanations through
experimental
design. Against this background, I propose to augment an experimental design that is as good as feasible with a nomological net of relations (hypotheses) between instructional support (intervention), learning processes and learning outcomes.
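The sample-size arithmetic of the jigsaw example above can be checked in a few lines (the per-condition figures follow the text; the power assumption of a large effect with 80% power is stated there, not computed here):

```python
# Back-of-the-envelope sample sizes for the three-condition jigsaw example.
individual = 32          # individual learning condition
unscripted = 32 * 4      # 32 groups of four, nested data
jigsaw = 32 * 16         # 32 groups of sixteen in the jigsaw condition

total = individual + unscripted + jigsaw
groups = 32 + 32         # groups across the two collaborative conditions

print(total, groups)     # 672 subjects; 64 groups, i.e. > 50 (Maas & Hox, 2005)
```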
Nomological networks
are known from construct validity (cf. Cronbach & Meehl,
1955). In
construct validity, the relations between variables (e.g. group
differences,
correlation matrices) are used to provide evidence for the
validity of a
measure. I would like to utilise this idea to validate the (causal)
relation between
interventions, mediators and outcome variables. The smallest
nomological net
possible comprises just two variables and one relation, but does
not yet
additionally validate a causal relation. The net, however,
becomes stronger the
more ties and knots are part of the net. The stronger the net,
the stronger the
confidence in the validity of causal relations between
variables. The smallest
net that can increase the confidence in a causal relation in
CSCL research is
the triangle of hypotheses described previously with three knots
and three
relations. By adding multiple process and outcome variables
together with the
corresponding relations between intervention, process and
outcome, the validity
of causal relations found can be strengthened.
2. Quality criteria for nomological nets
The development of such a nomological net requires some general quality criteria that allow evaluation of the net. I suggest adopting quality criteria from good research designs as described, for example, by Trochim and Land (1982). According to
these authors,
the nature of good designs is (1) theory grounded,
(2) situational, (3) feasible, (4) redundant, and (5) efficient.
In the
following sections, I provide a short explanation of the
criteria and some
illustrating examples from CSCL research for each criterion.
2.1 Theory grounded
The nomological net
needs to be theory grounded. For each of the relations, a
directional effect
needs to be explicable by a theory and may be backed up by
previous empirical
findings. The question, for example, concerning the extent to
which learners mutually
influence one another has attracted considerable attention in
CSCL research.
Particular focus has been placed on the degree to which groups
of learners
share a mutual understanding via social interaction.
Accordingly, attempts have
been made to quantify this process, which is referred to as
knowledge
convergence, based on analyses of text-based knowledge-building
processes. Mäkitalo-Siegl
and colleagues (2012) traced so-called knowledge pieces through
collaboration.
The transfer of knowledge pieces from one learner to another was
measured by
comparing the knowledge pieces mentioned by single learners
before, during and
after collaboration. The relations in the corresponding
nomological net would
be that collaborative learners share more knowledge pieces
during collaboration
and that these shared knowledge pieces are known better by group
members after
collaboration.
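The tracing of knowledge pieces across the pre-, during- and post-collaboration phases can be sketched with simple set operations. All learner names, piece labels and phase data below are invented for illustration; this is not the coding procedure of Mäkitalo-Siegl and colleagues, only the underlying set logic:

```python
# Hypothetical knowledge pieces held by two learners in three phases.
before = {"ann": {"p1", "p2"}, "ben": {"p3"}}
after = {"ann": {"p1", "p2", "p3"}, "ben": {"p2", "p3"}}

def transferred(before, after, learner, partners):
    """Pieces a learner holds afterwards that only partners held before."""
    partner_pieces = set().union(*(before[p] for p in partners))
    return (after[learner] - before[learner]) & partner_pieces

print(transferred(before, after, "ann", ["ben"]))  # {'p3'}
print(transferred(before, after, "ben", ["ann"]))  # {'p2'}
```

Shared pieces after collaboration (here `p2` and `p3`) would correspond to the knowledge-convergence relation in the nomological net.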
Weinberger, Stegmann
and Fischer (2007) presented an approach with additional
quantitative measures
of the convergence of prior knowledge, collaborative processes
and acquired
knowledge based on fine-grained (i.e. at the level of
inferences) analyses of
text-based data sources (pretest, text-based online discussion,
post-test). The
authors suggest using the variation coefficient of the number of
different
inferences within a group of learners (analogous to knowledge
pieces) as an
indicator of knowledge divergence. Applying these measures
provided insight
into the relationship between the processes and outcomes of
collaborative
knowledge construction (e.g., Weinberger, Stegmann, &
Fischer, 2005;
Zottmann, et al., 2013). The nomological net may comprise, for
example, the
relation that learners with high divergence during online
discussions (i.e.
contributing different as opposed to identical inferences) are
more likely to
share knowledge after collaboration than learners with high
convergence during
online discussions.
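The variation coefficient mentioned above is simply the standard deviation of per-learner counts relative to their mean. A minimal sketch with invented counts of different inferences per group member (the numbers are not from the cited studies):

```python
import statistics

def variation_coefficient(counts):
    """Population standard deviation divided by the mean."""
    return statistics.pstdev(counts) / statistics.fmean(counts)

# Invented per-learner counts of different inferences within two groups:
homogeneous = [10, 11, 9, 10]    # members contribute similar numbers
heterogeneous = [2, 18, 5, 15]   # contributions vary strongly

print(round(variation_coefficient(homogeneous), 3))    # close to 0
print(round(variation_coefficient(heterogeneous), 3))  # markedly larger
```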
In real life, however,
not all assumed relations show up as expected in empirical
research. The net is, of course, stronger if all hypotheses made a priori are confirmed. In practice, a net may need to be adapted post
the results at hand. In these cases, additional explanatory
variables and
relations may be added to achieve a consistent net. The new
relations need, of
course, to be theory grounded as well. By adding further
mediators, moderators
and/or suppressors, the results can form a consistent net again.
As a by-product, the adaptation of a net is a further development of the initial theory.
2.2 Situational
The manipulated/measured variables, as well as the relations between them, need to be situationally defined. The interventions, the process variables
and, to a large
extent, the outcome variables in CSCL research are highly
situational, i.e.
depend on the situation, content and context at hand. The
intervention is
usually one realisation out of an endless number of possible
realisations of a
specific theoretical principle (e.g. an implementation of the
jigsaw method;
cf. Wecker, 2013). The process and outcome variables may be
termed rather
general (such as “content quality”, “quality of argumentation”;
cf. Stegmann,
Weinberger, & Fischer, 2007), but the concrete
operationalisation requires
the inclusion of bottom-up criteria that derive from the
(learning) material
and raw process data. In most cases, especially in the case of
process
variables, general (i.e. less situational) measures of skills or
competences
(like those applied in PISA studies;
cf. Kobarg, Prenzel, & Seidel, 2011) are not suitable,
because they measure
features (abilities) of persons, not activities. It is,
therefore, necessary to
apply measures with a high content validity. Such measures
usually need to be
developed individually for each situation. It is necessary to
quantify the qualities
of collaborative activities with respect to multiple quality
dimensions. A
detailed description of such a Multidimensional Approach for the
Qualitative
Coding of Online Discussions (MAQCOD) can be found in Stegmann
and Fischer
(2011).
Collaborative (learning)
activities are, for example, often analysed in terms of the
content quality or
quality of the argumentation (e.g. Weinberger & Fischer,
2006). Depending
on the dimension, the grain size of the analysis needs to be
defined (cf.
Strijbos, Martens, Prins, & Jochems, 2006). The quality of
the
argumentation, for example, could be defined for the entire
discourse, single
messages or even single arguments. Which grain size is most
suitable depends on
the theoretically defined relationship between the quality
dimension to be
analysed and the intended learning outcome. If, for example, the
theoretical
model assumes that formulating arguments with grounds and
warrants is a core
collaborative learning activity, single arguments rather than
complete
conversations need to be the focus. Along with grain size, the
categories need
to be defined. Single arguments can, for instance, be coded in
terms of whether
they are grounded or not. The definitions of the dimensions,
grain size and
categories per dimension need to be carefully documented. This
may comprise:
segmentation rules and examples for the application of these
rules; the names
of dimensions and categories; rules about when to assign a
specific category;
and examples when a category applies and when it does not. This
documentation
forms the basis for an objective, reliable and content-valid
coding of learning
activities and, thereby, for the inclusion of process variables
in a
nomological net.
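The reliability of such a coding documentation is commonly checked by having two coders code the same segments and computing chance-corrected agreement. A minimal sketch of Cohen's kappa; the codings of ten single arguments as grounded ("g") or ungrounded ("n") are invented:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders over the same segments."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Invented codings of ten single arguments by two coders:
coder_a = ["g", "g", "n", "g", "n", "n", "g", "g", "n", "g"]
coder_b = ["g", "g", "n", "g", "g", "n", "g", "n", "n", "g"]
print(round(cohens_kappa(coder_a, coder_b), 2))  # 0.58
```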
2.3 Feasible
The inclusion of
process variables in a nomological net requires that it is
feasible to extract
the variable from the recorded activities during CSCL. It is,
for example,
problematic to measure the depth of cognitive processing just by
analysing
written CSCL discourse data. Researchers may argue that a
sophisticated,
well-elaborated argument can be regarded as an indicator of deep
cognitive
processing. The argument, however, might be just “copied” from a
different
source (e.g. a learning partner or prior knowledge) without deep
cognitive
processing. Such a measure, therefore, does not have sufficient
content
validity to be included in a nomological net with the intended
function. This
is not an argument against the variable “depth of cognitive
processing” in
general. The requirement is to measure the variable in a
content-valid way,
i.e. as directly as possible. If data sources such as
think-aloud protocols are
available
(e.g. Stegmann, Wecker, Weinberger, & Fischer, 2012), it
might be adequate
to include such a variable in a nomological net.
2.4 Redundant
As in the cockpit of an aeroplane, central components of the nomological net might be redundant, i.e. implemented several times. In CSCL research, this
redundancy is often
regarded as a methodological challenge rather than a strength of
the research
design. An inherent feature of collaborative learning is the
nestedness of
learners in groups and in time. Learners are part of a group and
thereby
features of the group affect learning. In addition, the
knowledge and skills of
the single learner as well as of the group change over time
(Wise & Chiu,
2011). The activities of learners in a group are affected not
only by the
initial features of the single learners and the collaborative
learning phase,
but also by the activities that the single learner and the group
performed
previously. Furthermore, instructional support such as
collaboration scripts
affects the relationship between previous and current
activities.
The nestedness of
learners in groups and in time is not only an issue if
researchers aim to
understand why learners learn in a certain way; learners and
groups also change
over time. The Script Theory of Guidance (SToG; Fischer, Kollar,
Stegmann,
& Wecker, 2013), for example, assumes that instructional
support for
collaborative learning needs to be adapted consecutively to
ensure the optimal
fit between the skills of the single learners/the group and the
instructional
support. As a result, a nomological net may include relations
that reflect such
ideas but add time as a moderator of the effect of an
intervention on the
process of collaborative learning.
As already noted in the jigsaw study example, (quantitative) research on CSCL often requires many more participants because learners who learned in groups cannot be regarded as independent observations. From the viewpoint of
the nomological
net, the relation, for example, of an intervention at group
level on a specific
process is redundant within a group. For an intervention that
aims to
facilitate, for example, argumentation sequences (e.g.,
Stegmann, Weinberger,
& Fischer, 2011; Jeong, Clark, Sampson & Menekse, 2010),
a positive
relation between the intervention and the number of
argument-counterargument-syntheses sequences may be added to the
nomological
net at group level. On an individual level (i.e. the level of
group members),
positive relations between the intervention and the number of
contributed
counterarguments and syntheses may be added to the net.
Furthermore, scaffolds
examined in CSCL research often focus on specific processes that
occur multiple
times during collaborative learning. If an intervention such as
a collaboration
script aims to support the quality of each single argument (cf.
Stegmann,
Wecker, Weinberger, & Fischer, 2012) with respect to grounds
and warrants,
the effect is expected to show up on the level of each argument
(as the
probability that a claim is supported by ground and/or warrant),
on the level
of each individual learner (as the share of grounded/warranted
claims
contributed by an individual), and on the group level (as a
higher
argumentative quality of the discussion).
2.5 Efficient
An important aspect,
finally, is the efficiency of the testing of the relations
specified in the
nomological net. Efficiency is in general determined by two
factors: the
usefulness of a result and the extent of resources required to
reach the
result. This criterion may seem to contradict the previously described criteria, which require rather qualitative analyses of processes on multiple levels, including time series. Many more
resources (i.e. technology to record data, space to archive the
data, manpower
to develop coding manuals and to analyse data) need to be spent.
In CSCL
research, however, the digital learning environment in which
learning
activities under examination usually take place can reduce the
amount of
resources. Assessing and analysing data, especially, are supported by technology. Technologies like iBeacon or active RFID chips allow learners to be traced, as well as the interactions between them and the artefacts they learn from, in the context, for example, of museums. Eberle and colleagues (2013),
for example,
traced the activities of conference participants using active
RFID chips to
examine the relation between interaction between conference
participants,
planned future collaboration right after the conference and
collaborative
publications two years after the conference. In scenarios with
computer-mediated communication, the communication can easily be
recorded.
The opportunity to log
data in technology-enhanced learning environments can easily
produce a large
amount of data that exceeds the limits for meaningful human
analyses. The
development in the area of machine learning technology, however,
enables
researchers to train algorithms to – supervised or unsupervised
– analyse
digital data according to multiple dimensions such as quality of
argumentation,
content quality or emotions. To apply these algorithms to data
at hand, in a
first step, features of the learning processes need to be
extracted. In the
case of written discourse data, for example, the number of
specific words or
word pairs, the punctuation or the line length are extracted.
This step can be
easily performed using tools such as TagHelper (Rosé et al.,
2008) or lightSIDE
(Mayfield & Rosé, 2012). In a second step, these features
are used in
conjunction with a human coding that serves as training material
to build
models that are able to measure the respective quality.
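The two steps (feature extraction, then a model trained on human-coded examples) can be sketched with the standard library alone. This is not TagHelper, LightSIDE or ACODEA, only a deliberately crude stand-in: word counts as features and a per-code word profile as the "model"; all training sentences and codes are invented:

```python
from collections import Counter

def features(text):
    """Word counts as a simple feature representation of a segment."""
    return Counter(text.lower().split())

# Invented human-coded training segments ("g" = grounded, "n" = ungrounded).
training = [
    ("the data show this because the sample was large", "g"),
    ("this holds since the pretest scores support it", "g"),
    ("i just think so", "n"),
    ("no idea really", "n"),
]

def train(examples):
    """Summed word counts per code: a crude word profile per category."""
    profiles = {}
    for text, code in examples:
        profiles.setdefault(code, Counter()).update(features(text))
    return profiles

def classify(profiles, text):
    """Assign the code whose word profile overlaps the segment most."""
    f = features(text)
    return max(profiles, key=lambda c: sum(min(f[w], profiles[c][w]) for w in f))

model = train(training)
print(classify(model, "the scores support this because the sample was large"))  # g
```

Real systems replace the overlap heuristic with proper machine-learning classifiers and far richer features (word pairs, punctuation, line length), but the pipeline shape is the same.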
Recently, Mu and
colleagues (2012) presented the ACODEA framework, which may
serve as a
blueprint on how to apply this technology in CSCL research. The
empirical
results presented by Mu and colleagues (2012) show that this procedure enables objective analyses of texts that were not used for training. The results obtained were at the same level as those produced by human coding and, in some cases, even exceeded the level of inter-human objectivity.
While the described
application of technology in the research process contributes to
a reduction of
required resources, I further argue that the usefulness of the
results is
increased by testing a comprehensive nomological net in
comparison to results
not embedded in a net. Testing the three types of hypotheses of
the general
triangle of hypotheses (cf. Fig. 1) increases the probability
that a study will
produce useful results (and not just because three times more
hypotheses are
tested). If all of the three hypotheses are significant, this
can be regarded
as a validation of the underlying theoretical model. However, if
one or two of
the three hypotheses fail (e.g. the relationship between
learning activities
and learning outcomes), but the others are significant, the
findings provide a
starting point for explanations that may improve the initial
theoretical model.
It is only if all of the hypotheses fail to be significant that
the empirical
results will be completely inconclusive regarding
generalisability and causal
relations. Nevertheless, a more in-depth analysis of learning
activities and
post hoc adaptation of the nomological net still provides insights
into the mechanisms
of learning, regardless of significant effects on learning
outcomes.
3. Conclusion
The general structure
of CSCL research can be described using the introduced triangle
of hypotheses.
Therefore, nomological nets are an inherent feature of CSCL
research. By making
these nomological nets explicit and by designing them according
to the
presented criteria, the research becomes more potent: the risk
of inconclusive
results is reduced, while results that form a coherent nomological net can be interpreted with greater confidence even if the experimental
design has some
flaws. This, however, is by no means an argument for conducting studies with an easily improvable experimental design or for skipping experimental variation completely. An experimental design that is as good as possible is the basic
prerequisite for the nomological net to contribute to
strengthening the
confidence in causal interpretations of effects. The suggestion
to use a
nomological net as described is, nevertheless, not limited to
quantitative
research approaches. Some if not all relations might be examined
with
qualitative methods. The effect on the confidence in the
interpretation is the
same as in quantitative methods: it increases. Actually, I would
expect
quantitative and qualitative methods to be used in a
complementary way to form
nomological nets in CSCL research. The explication of the
nomological net,
therefore, should become obligatory in reports and presentations
on research in
CSCL. Studies that aim to provide evidence for causal relations
need to report effects on both processes and outcomes, not just one or the other. The processes
have to be
analysed in a way that ensures content validity. The
(statistical) analyses
have to make use of the multilevel structure of the process
data. New
technologies have to be applied to cope with the vast amount of
data. If this becomes standard in CSCL research, it can be expected to contribute substantially more to knowledge accumulation in this area of research.
Keypoints
The nomological net needs to be (1) theory grounded, (2) situational, (3) feasible, (4) redundant, and (5) efficient.
References
Aronson,
E. (1978). The jigsaw
classroom. London:
Sage.
Clark,
D. B., D'Angelo, C. M.
& Menekse, M. (2009). Initial structuring
of online discussions to improve
learning and argumentation: Incorporating students' own
explanations as seed
comments versus an augmented-preset approach to seeding
discussions. Journal
of Science Education and Technology,
18,
321-333. doi:10.1007/s10956-009-9159-1
Cook, T.
D., & Campbell, D. T. (1979). Quasi-experimentation:
Design and analysis for field setting. MA: Houghton Mifflin.
Cronbach,
L. J., & Meehl, P. E. (1955). Construct validity in
psychological tests. Psychological Bulletin, 52(4), 281-302. doi:10.1037/h0040957
de Jong, T., & van Joolingen, W. R. (1998). Scientific discovery learning with computer
simulations of conceptual domains. Review
of Educational Research, 68, 179-201.
doi:10.3102/00346543068002179
Eberle,
J., Stegmann, K., Lund, K., Barrat, A., Sailer, M., &
Fischer, F. (2013). Fostering
learning and collaboration in a scientific community –
evidence from an
experiment using RFID devices to measure collaborative
processes. In N. Rummel,
M. Kapur, M. Nathan, & S. Puntambekar (Eds.), To See the World and a Grain of Sand: Learning
across Levels of Space,
Time, and Scale: CSCL 2013 Conference Proceedings Volume 1
— Full Papers &
Symposia (pp. 169-175). International Society of the
Learning Sciences.
Jeong,
A., Clark, D. B., Sampson, V. D., & Menekse, M. (2010).
Sequential analysis
of scientific argumentation in asynchronous online
discussion environments. In
S. Puntambekar, G. Erkens & C. Hmelo-Silver (Eds.), Analyzing Interactions in CSCL: Methodologies,
Approaches and Issues.
Berlin: Springer. doi:10.1007/978-1-4419-7710-6_10
Fischer,
F., Kollar, I.,
Stegmann, K., & Wecker, C. (2013). Toward
a script theory of
guidance in computer-supported collaborative learning. Educational Psychologist, 48(1),
56-66. doi:10.1080/00461520.2012.748005
Fischer,
S., Lowe, R. K., & Schwan, S. (2008). Effects of
presentation speed of a
dynamic visualization on the understanding of a mechanical
system. Applied
Cognitive Psychology, 22(8), 1126-1141.
doi:10.1002/acp.1426
Kobarg, M.,
Prenzel, M., & Seidel, T. (2011). An
international comparison of science teaching and learning.
Further results from
PISA 2006.
Münster: Waxmann Verlag.
Koschmann,
T. D. (1994). Toward a
theory of computer support for collaborative learning. The Journal of the Learning Sciences, 3(3), 219-225. doi:10.1207/s15327809jls0303_1
Maas, C.
J. M., & Hox, J. (2005). Sufficient sample sizes for
multilevel modeling. Methodology,
1,
86–92. doi:10.1027/1614-1881.1.3.86
Mayfield,
E., & Rosé, C. P. (2012). LightSIDE: Open Source Machine
Learning for Text
Accessible to Non-Experts. In M. D. Shermis & J.
Burstein (Eds.), Handbook
of Automated Essay Grading (pp.
124-135). New York: Routledge.
Mäkitalo-Siegl,
K., Stegmann, K., Frete, A., & Streng, S. (2012).
Orchestrating
computer-supported collaborative learning: Effects of
knowledge sharing and
shared knowledge. In S. Abramovich (Ed.), Computers
in education (pp. 75-91). Commack, NY: Nova Science
Publishers.
Molinari,
G., Chanel, G., Betrancourt, M., Pun, T., & Bozelle, C.
(2013). Emotion
Feedback during Computer-Mediated Collaboration: Effects on
Self-Reported
Emotions and Perceived Interaction. In N. Rummel, M. Kapur,
M. Nathan, & S. Puntambekar (Eds.), To See the World
and a Grain of Sand: Learning across Levels of Space,
Time, and Scale: CSCL
2013 Conference Proceedings Volume 1 — Full Papers &
Symposia (pp.
336-343). International Society of the Learning Sciences.
Mu, J.,
Stegmann, K., Mayfield, E., Rosé, C. & Fischer, F.
(2012). The ACODEA
framework: Developing segmentation and classification
schemes for fully
automatic analysis of online discussions. International
Journal of Computer-Supported Collaborative Learning,
7(2), 285–305. doi:10.1007/s11412-012-9147-y
Osborne,
J., & Hennessy, S. (2003). Literature
review in science education and the role of ICT: promise,
problems and future
directions. Bristol: NESTA Futurelab. Retrieved from http://www.futurelab.org.uk/resources/publications_reports_articles/literature_reviews/Literature_Review380
Palincsar,
A. S., & Brown, A. L. (1984). Reciprocal teaching of
comprehension-fostering and comprehension-monitoring
activities. Cognition
& Instruction, 1(2), 117-175.
doi:10.1207/s1532690xci0102_1
Rosé, C.
P., Wang, Y. C., Arguello, J., Stegmann, K., Weinberger, A.,
& Fischer, F.
(2008). Analyzing collaborative learning processes
automatically: Exploiting
the advances of computational linguistics in
computer-supported collaborative
learning. International
Journal of Computer-Supported
Collaborative Learning, 3(3),
237-271. doi:10.1007/s11412-007-9034-0
Stahl, G.
(2006). Group
cognition. Cambridge,
MA: MIT Press.
Stegmann,
K. & Fischer, F. (2011). Quantifying qualities in
collaborative knowledge
construction: the analysis of online discussions. In S.
Puntambekar, G. Erkens
& C. Hmelo-Silver (Eds.), Analyzing
interactions in CSCL: methods, approaches and issues
(pp. 247-268). New
York: Springer.
doi:10.1007/978-1-4419-7710-6_12
Stegmann,
K., Weinberger, A.,
& Fischer, F. (2011). Aktives Lernen durch
Argumentieren: Evidenz für das
Modell der argumentativen Wissenskonstruktion in
Online-Diskussionen [Active
learning by argumentation: Evidence for the model of
argumentative knowledge
construction in online discussions.]. Unterrichtswissenschaft,
39(3), 231–244.
Stegmann,
K., Wecker, C.,
Weinberger, A. & Fischer, F. (2012). Collaborative
argumentation and
cognitive elaboration in a computer-supported collaborative
learning
environment. Instructional
Science, 40(2),
297-323.
doi:10.1007/s11251-011-9174-5
Stegmann,
K., Weinberger, A., & Fischer, F. (2007). Facilitating
argumentative
knowledge construction with computer-supported collaboration
scripts. International
Journal of Computer-Supported
Collaborative Learning, 2(4),
421-447. doi:10.1007/978-0-387-36949-5_12
Strijbos,
J.-W., Martens, R. L., Prins, F. J., & Jochems, W. M. G.
(2006). Content
analysis: What are they talking about? Computers
& Education, 46(1), 29-48.
doi:10.1016/j.compedu.2005.04.002
Trochim,
W., & Land, D. (1982). Designing designs for research. The
Researcher,
1(1), 1-6.
van
Boxtel, C., van der
Linden, J., Roelofs, E., & Erkens, G. (2002). Collaborative
Concept Mapping: Provoking and Supporting Meaningful
Discourse. Theory
Into Practice, 41(1),
40-46. doi:10.1207/s15430421tip4101_7
Wecker,
C. (2013). How to support prescriptive statements by
empirical research: Some
missing parts. Educational
Psychology
Review, 25(1),
1-18. doi:10.1007/s10648-012-9208-9
Wecker,
C., Stegmann, K., & Fischer, F. (2012). Lern-
und
Kooperationsprozesse: Warum sind sie interessant und wie
können sie analysiert
werden? [Learning and Cooperation Processes in Case-Based
Learning. Interesting
Issues and Analysis Approaches] REPORT:
Zeitschrift für Weiterbildungsforschung, 35(3), 30-41. doi:10.3278/REP1203W
Wegerif,
R. (2002). Thinking skills,
technology and learning: a review
of the literature for NESTA FutureLab.
Bristol: NESTA Futurelab.
Retrieved from http://www.futurelab.org.uk/resources/publications_reports_articles/literature_reviews/Literature_Review394
Weinberger,
A., & Fischer, F. (2006). A framework to analyze
argumentative knowledge
construction in computer-supported collaborative learning. Computers
& Education, 46(1), 71-95.
doi:10.1016/j.compedu.2005.04.003
Weinberger,
A., Marttunen, M.,
Laurinen, L., & Stegmann, K. (2013). Inducing
socio-cognitive conflict
in Finnish and German groups of online learners by CSCL
script. International
Journal of Computer-Supported
Collaborative Learning, 8(3),
333-349. doi:10.1007/s11412-013-9173-4
Weinberger,
A., Stegmann, K., & Fischer, F. (2005).
Computer-supported collaborative
learning in higher education: Scripts for argumentative
knowledge construction
in distributed groups. In T. Koschmann, D. D. Suthers &
T.-W. Chan (Eds.), Computer
Supported Collaborative Learning
2005: The Next 10 Years! Proceedings of the International
Conference on
Computer Supported Collaborative Learning 2005 (pp.
717-726). Mahwah, NJ: Lawrence Erlbaum.
doi:10.3115/1149293.1149387
Weinberger,
A., Stegmann, K.,
& Fischer, F. (2007). Knowledge
convergence in collaborative learning:
Concepts and assessment. Learning and
Instruction, 17(4),
416-426.
doi:10.1016/j.learninstruc.2007.03.007
Weinberger,
A., Stegmann, K., & Fischer, F. (2010). Learning to
argue online: Scripted
groups surpass individuals (unscripted groups do not). Computers in Human Behavior, 26(4),
506-515. doi:10.1016/j.chb.2009.08.007
Wise, A.
F., & Chiu, M. M. (2011). Analyzing temporal patterns of
knowledge
construction in a role-based online discussion. International Journal of Computer-Supported
Collaborative Learning,
6(3),
445-470. doi:10.1007/s11412-011-9120-1
Zottmann,
J., Stegmann, K., Strijbos, J. W., Vogel, F., Wecker, C.,
& Fischer, F.
(2013). Computer-supported collaborative learning with
digital video cases in
teacher education: The impact of teaching experience on
knowledge convergence. Computers in Human
Behavior,
29(5), 2100-2108. doi:10.1016/j.chb.2013.04.014