2002 Seed Grants

Title: Performance Assessment as an Anchor for Cross-Site Investigations of Customization

PIs: Matthew W. Brown, University of Illinois, Chicago; Britte Cheng, University of California, Berkeley

Other collaborating institutions: University of California, Berkeley; University of Georgia; Wheeling Jesuit University; Michigan State University

The goal of this project is to create a community of researchers dedicated to a multifaceted investigation of customization across multiple sites. Our collaboration relies on assessment as a primary vehicle for understanding the implications of customization for student learning.

The context for this collaboration will be investigations of student modeling in astronomy. The group will collaboratively construct a shared template for a computer-based modeling activity for high school astronomy classrooms, implement customized versions of it in several classrooms across sites, and evaluate student learning outcomes in each case. An additional task will be to develop common research instruments and assessments. Beyond collaborative inquiry into the domain issues, our larger research goal is to examine the processes of and drivers for local customization and to explore the implications of this process for student learning, as evidenced in patterns of assessment outcomes. In particular, we are interested in performance assessments because they provide a promising way of documenting the dynamic practices involved in students' use of computer-based modeling (Haertel & Means, 2000).

Final Report


Title: Learning Via Distributed Dialogue: Livenotes and Handheld Wireless Technology

PI: John Canny, University of California, Berkeley

Other collaborating institutions: University of Washington, Seattle

We aim to contribute to the Technology in Learning Assessments theme insights into how to measure the effects of distributed dialogue on learning. In recent years, whiteboard technologies have been increasingly tested as a means of supporting collaborative learning, but few metrics exist to characterize the dynamic nature of shared note-taking. Our software platform, Livenotes, differs from other whiteboard technologies in using a synchronous, handheld wireless format that is readily portable inside and outside classrooms. Students can engage in peer-to-peer dialogue, directly asking each other questions about a lecture or presentation, drawing diagrams, discussing a professor's overheads, commenting on each other's ideas, and sharing information. This differs greatly from other approaches to collaborative learning, which do not distribute input among students, rely on handheld styli, or combine handwritten text and drawing.

Our focus is, therefore, to develop metrics that capture the learning that happens through the process of distributing dialogue and that can be generalized to other settings that make use of distributed dialogue. These metrics will not be based on formal educational testing approaches, but on the ways in which students interact as they piece together input on the Livenotes whiteboard in real time. In particular, we compare metrics for learning via distributed dialogue in two different contexts: drawing in a design class at the University of Washington and dialogue in an education class at the University of California, Berkeley.

Full details of Livenotes can be found at: http://newmedia.colorado.edu/cscl/225.html

Final Report


Title: Assessment of Problem-Solving Competence Via Student-Constructed Visual Representations of Scientific Phenomena

PI: Jerry P. Suits, McNeese State University

Other collaborating institutions: Better Education, Inc.; Southern University, Baton Rouge; SRI International; UCLA; Vanderbilt University

Students in chemistry, physics, and engineering courses frequently attempt quantitative problem-solving tasks without the aid of the visual imagery that scientists and engineers use to solve such tasks. Technology can be designed to provide a learning environment that supports students as they sketch a representation of the problem. This mode of technology-based assessment scaffolds learning along a trajectory that is contingent on student decisions and constrained by both the visual-to-symbolic instructional sequence and corrective feedback, which is given as needed.

The objectives of this seed grant are:

  1. To explore how technology-based assessment tools can be designed to support student-constructed visual representations of quantitative problems based upon scientific phenomena.


  2. To forge a new collaborative venture among physics, engineering, and chemistry educators in order to encourage cross-fertilization and future collaborations.


  3. To evaluate how effectively the tools developed under the first objective support student-constructed visual representations that might promote equity in scientific problem-solving competence.


Final Report