Assessing Acquisition

If you follow my blog, you’re probably used to reading this again and again by now:  When the input is comprehensible and compelling, chances are good that our students will acquire the language.

BUT HOW WILL WE KNOW FOR SURE?

Of course, Pearson has a device-based a$$essment for that. But really, assessing language acquisition is pretty Old School. You start with this Essential Question:

‘Do my students understand the message, and can they show me?’

So as you are chatting, asking story details, reviewing the facts, dramatizing, reading a leveled novel, viewing/narrating a video clip, etc., you ensure that your students are comprehending language in real time. How? You teach to the eyes, you monitor individual and choral responses, you measure engagement (student posture, eye contact, appropriate reactions: laughter, surprise, rejoinders), and you may occasionally even ask, “What does this (word) mean?” or have the group translate a sentence or passage into English. These ongoing formative assessments ensure that the input is always comprehensible. (See Teaching with Comprehensible Input Foundational Skills, here.)

Knowing that Comprehensible Input drives acquisition strongly suggests that the great majority of class time, particularly for novice-level learners, ought to be spent taking in the target language – either aurally or through reading (‘input’ or ‘receptive’ skills). And yet, many novice-level language assessments focus equally on writing and speaking, the two later-acquired ‘output’ or ‘productive’ language skills. (For a discussion of input before output, read this.) During the first several hundred hours of instruction, students require copious amounts of compelling comprehensible input, and their progress, therefore, ought to be monitored only through measures of comprehension.

Before digging into what an SLA-informed comprehension-based assessment for young novices might look like, I feel compelled to question our motive for formally assessing students before 5th grade, who, depending on the program offering, have fewer than 300 hours in the target language under their belts. By formally, I’m referring to nationally normed foreign language assessments such as the ELLOPA (Early Language Listening and Oral Proficiency Assessment for grades PreK-2) or SOPA (Student Oral Proficiency Assessment for grades 2-8). These are “…language proficiency assessment instruments designed [by the Center for Applied Linguistics or CAL] to allow students to demonstrate their highest level of performance in oral fluency, grammar, vocabulary, and listening comprehension.” Another similar, commonly used assessment tool is the Avant STAMP 4Se (Standards-based Measurement of Proficiency, grades 2-6): “STAMP’s [computer interface] adaptive test design adjusts to a student’s level so s/he is challenged, but not overwhelmed.” These instruments claim to dovetail with the American Council on the Teaching of Foreign Languages (ACTFL) proficiency guidelines, yielding a rating in each of the 4 language skills: Listening, Speaking, Reading and Writing. Such inventories also claim to help schools and language programs “refocus their curricula and introduce professional development to hone their teachers’ ability to deliver improved outcomes.” (STAMP 4Se)

BUT AGAIN, WHY WOULD WE ASSESS THE OUTPUT SKILLS [Writing and Speaking] OF OUR YOUNG NOVICES?

Instead, here are some teacher-made test item types that might comprise a comprehension-based assessment for the young novice language learner (acquirer):

• Listen to a prompt and circle the correct picture representation of it.

• While viewing an image or storyboard, re-order the pictures according to the Hebrew oral story or instructions, or answer Yes/No or Either/Or questions.

• Demonstrate comprehension through oral performance-based tasks such as those in a Total Physical Response (TPR) series (e.g., Simon Says) or Listen & Draw.

• Listen to a brief mini-story in Hebrew, and circle the correct facts, in English (this way the student demonstrates comprehension, not just recognition of similar Hebrew text/words).

• (For literate students): Demonstrate comprehension through reading-based performance tasks (e.g., following written instructions to draw a picture).

What do all these novice-level assessment items have in common? They require comprehension of the aural or written message, but they don’t require speaking or writing (output). They rely on language that the student has already been exposed to, but in novel contexts. They don’t ask the students to produce language beyond their level of acquisition. There are no oral interviews, which presuppose facility and control at the discourse level. The tasks and items are unrehearsed, not studied or practiced. The assessment doesn’t emphasize grammatical accuracy or discrete vocabulary knowledge. In these ways, students can demonstrate what they do know, and not feel anxiety or shame for what they don’t.

For the novice-mid level up through intermediate-low (again on the ACTFL proficiency scale), teachers may ask students in, say, grades 6 and up to do prompted free writes, in which the students write in Hebrew as much as they can on a given topic, or retell a story that was generated in class. Here, teachers simply count the Hebrew words (proper nouns like ‘Disneyland’ and ‘Barney’ are excluded), and watch the length of these writings grow over time. The writings are not corrected, but rather assessed for comprehensibility and complexity, and they may provide critical information to the teacher. If the same written error is repeated by several students, the teacher may choose to include the difficult or confusing word chunk in her classroom banter and story-asking, so that her students hear it correctly and repeatedly, without drilling, and in context.
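For teachers who want to track those free-write word counts over time, here is a minimal sketch of the tally described above. It is only an illustration, not a tool from this post: the excluded proper-noun list, the sample free writes, and the whitespace-based word splitting are all assumptions.

```python
# Minimal sketch of the free-write word count described above (an illustration,
# not the author's tool). Assumptions: excluded proper nouns appear in Latin
# script, and whitespace-separated tokens stand in for "words".
EXCLUDED_PROPER_NOUNS = {"Disneyland", "Barney"}

def count_target_language_words(free_write: str) -> int:
    """Count tokens in a timed free write, skipping excluded proper nouns."""
    tokens = (t.strip(".,!?") for t in free_write.split())
    return sum(1 for t in tokens if t and t not in EXCLUDED_PROPER_NOUNS)

# Hypothetical longitudinal record: testing date -> one student's free write.
samples = {
    "2023-10-01": "Barney הולך הביתה",
    "2024-01-15": "Barney הולך הביתה, רואה את Disneyland, והוא שמח",
}

for date, text in samples.items():
    print(date, count_target_language_words(text))  # length grows over time
```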

Video-recorded student retells also provide insight for the conscientious teacher and her students. While not all students need be recorded at each testing interval, the teacher may choose to collect such documentation to share with parents as a window into their child’s developing proficiency; to study for patterns, holes and phenomena; and to gather longitudinal data for comparison and documentation in a portfolio. Such extemporaneous output-based assessments are not recommended for beginner novices who have yet to build a Hebrew language foundation. Open-ended oral interviews are frustrating and discouraging for novice-low students.

Let’s review:

Our assessments ought to reflect what we’re doing in class (T/CI for novice through intermediate-level classes), and provide valuable feedback for informing and refining our instruction;

According to SLA research, we can’t expect our students to speak and write before they’ve had copious amounts of comprehensible input [“A flood of input for a trickle of output,” Wynne Wong];

By teaching with CI, our students develop spontaneous, unrehearsed, and fluent output.  Even our novice students create with language in response to our constant questioning, although it may only be in short-answer format;

Assessment that triggers the affective filter (i.e., anxiety) or discourages our ‘language babies’ is counter-productive for students and teachers alike;

Teacher-student interactions ought to focus on meaning, not form (grammar, syntax, morphology, phonology);

The best way to measure acquisition for beginners is to teach to the eyes, form a trusting community in which the affective filter remains low, and collaborate on compelling comprehensible input;

Early-start, long-sequence language programs are better in the long run, affording students a better ear and accent, more exposure to late-acquired features, and overall more time to acquire;

We can do our best to optimize the input, but we can’t rush Mother Nature!

2 thoughts on “Assessing Acquisition”

    • Shalom, Gavriel. I’ve been reticent on the issue of formal assessment and grades because the BIG RESEARCHERS aren’t really on board for assessing and grading acquisition, certainly not for novices. I’ve heard Dr. Krashen say this when directly asked by an audience of language instructors: ‘Assessment and grades have no place in the language classroom.’
      Since we can’t know what’s going on in our students’ growing mental representation system, we can’t measure or quantify their acquisition. Further befuddling the issue are non-linear growth, the fact that acquisition is NOT a conscious process (it often happens without one’s knowledge, as in L1 acquisition), and great variance in individual rates of acquisition…

      Sooo, what do those of us do, who work in the real world of classrooms, parents, administrators and paychecks? We look to those activities that align with SLA, and we monitor student language growth over time. SLA is slow, piecemeal and stage-like (Dr. Bill VanPatten). We would prolly not take out our yardsticks until after our Ss had a few hundred hours or so…. Then, depending on age, target language, program details, and other possible factors, we may be able to ‘peek into’ our students’ growing acquisition by way of unrehearsed and unannounced ON-DEMAND timed and free writes, or periodic video-recorded class and small group conversations, both reflecting a growing foundation in the target language as well as increasing accuracy and sophistication, to try to get a snapshot of what’s been acquired….
      The fact is that we are constantly performing formative assessment – we check for comprehension to ensure everyone is on board our comprehension train…
      For those of you interested in this topic, like Gavriel, I will pull together some assessment and grading links & resources to share. Luckily, in my elementary school setting, I don’t give grades, and only report on student progress once per year, using a narrative that I developed, with a simple rubric at the bottom reflecting acquisition-conducive behaviors. I will share that document as well.
