
CILT99 Posters

Assessments for Learning




Software Design Apprenticeships in Elementary Science Classrooms: Development of Evaluative Standards by Newcomers and Oldtimers

Yasmin B. Kafai, Cynthia Carter Ching
UCLA Graduate School of Education & Information Studies

Cathleen Galas
Corinne Seeds University Elementary School

Project-based learning approaches have become an umbrella term for reform-minded efforts in mathematics and science education. While many studies have demonstrated the academic learning benefits, another aspect of project learning, which we call mindful practices, has been more difficult to document. Mindful practices describe a rich set of practices, such as students' evaluative standards, project management skills, and tool competencies, that develop through participation in communal activities. In our poster presentation, we will focus on students' development of evaluative standards within the context of a project-based software design activity. A class of 31 fourth and fifth graders, organized into seven teams, participated with their teacher in a three-month-long science project in which they designed and implemented educational software simulations about neuroscience for younger students, third graders, in their school. Each team was composed of "oldtimer" students who had participated in a previous design project and "newcomer" students who were apprenticed into the design project by the oldtimers. In addition, eleven newcomers had served as software users when in third grade. Debriefing interviews about their project learning experiences were conducted with all students at the end of the project, and the answers were compared across the groups of users, newcomers, and oldtimers. Results indicate that oldtimer students developed an expanded repertoire of evaluation standards compared to newcomers. In the discussion we consider what we have learned about the nature of students' mindful practices, in particular their development of evaluative standards, and about the design of a classroom learning environment that sustains the development of mindful practices through the apprenticeship model. We will also consider implications for the design of technology-based evaluation support.

Automated text analysis to assess cognitive outcomes of collaborative learning

Chris Teplovs, Darrell Laham, Carl Bereiter, Marlene Scardamalia, Peter Foltz and Tom Landauer.

Semantic analysis of texts based on word co-occurrences is decades old. Until recently, however, this kind of analysis was used almost entirely for descriptive and indexing purposes. Armed with statistical and cognitive models more powerful than those available in the past, researchers have demonstrated a remarkable range of scientific and practical applications of Latent Semantic Analysis (LSA), among which is the assessment of individual levels of content knowledge (Deerwester et al., 1990; Landauer & Dumais, 1997; Kintsch, 1998; Landauer et al., 1998). Knowledge assessment begins by analyzing a body of texts representing known levels of knowledge in a domain (for instance, textbooks at elementary to advanced levels). This produces a many-dimensional semantic space, and subsequent analyses can situate students' texts in this space and provide a measure of semantic distance between a student's text and the various reference texts. Learning can be demonstrated by showing that texts produced later in a learning sequence are closer to more advanced parts of the semantic space. Whereas this kind of assessment has been applied to essay exams, texts derived from online discussions, problem solving, and the like present additional challenges that telelearning researchers have recently begun to investigate. Among the problems are the quoting or paraphrasing of source material, which can produce spuriously high evaluations, and off-topic social and procedural discourse, which might produce misleadingly low ones. The general strategy suggested by Landauer et al. for dealing with such problems is to perform supplementary analyses aimed at measuring the problematic behaviors. In this way LSA can be expanded to yield additional variables of potential interest in their own right. In this poster, we present some early results on the use of LSA in assessing cognitive outcomes of online discussions.
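The procedure described above can be sketched in a few lines. The toy corpus, vocabulary handling, and weighting below are illustrative assumptions, not the authors' system: a term-document matrix is built from reference texts of known knowledge levels, a truncated singular value decomposition yields the reduced semantic space, a new student text is "folded in" to that space, and cosine similarity serves as the measure of semantic distance.

```python
# Minimal LSA sketch (hypothetical corpus; real systems use large corpora
# and log/entropy term weighting before the SVD).
import numpy as np

reference_docs = [
    "the neuron sends a signal",                                     # elementary
    "the axon conducts an action potential signal",                  # intermediate
    "synaptic transmission modulates action potential propagation",  # advanced
]
vocab = sorted({w for d in reference_docs for w in d.split()})

def counts(text):
    # Raw term-frequency vector over the fixed vocabulary.
    return np.array([text.split().count(w) for w in vocab], float)

# Term-document matrix: rows are terms, columns are reference texts.
A = np.column_stack([counts(d) for d in reference_docs])

# Truncated SVD: keep k dimensions of the semantic space.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
Uk, sk = U[:, :k], s[:k]

def project(text):
    # Fold a new text into the reduced space: v = S_k^-1 U_k^T a
    return (Uk.T @ counts(text)) / sk

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# Semantic distance of a student text to each reference level; the
# highest-similarity reference suggests the closest knowledge level.
student = "the axon sends an action potential"
ref_vectors = [project(d) for d in reference_docs]
scores = [cosine(project(student), r) for r in ref_vectors]
```

Learning over time would then show up as later student texts scoring closer to the more advanced reference vectors.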

References

Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., and Harshman, R. (1990). Indexing by Latent Semantic Analysis. Journal of the American Society for Information Science, 41: 391-407.

Kintsch, W. (1998). Comprehension: a paradigm for cognition. New York: Cambridge University Press.

Landauer, T. K. and Dumais, S. T. (1997). A solution to Plato's problem: The Latent Semantic Analysis theory of the acquisition, induction, and representation of knowledge. Psychological Review, 104: 211-240.

Landauer, T. K., Foltz, P. W. and Laham, D. (1998). An introduction to latent semantic analysis. Discourse Processes, 25: 259-284.