Nearly every organization collects survey data of some sort. Few actually know what they’re doing.
When my son was five, he took a short test to get into a bilingual school. The three of us — me, him, and an instructor from the school — sat outside as the instructor asked him questions from the test packet. There were questions about shapes and numbers, questions in Chinese and English, and a bank of questions that astounded me with its stupidity.
Approximately 20 questions, which amounted to about a fifth of the overall test, were in the following format:
“Answer the following question in a complete sentence. What’s your favorite kind of animal?”
Having just turned five, my son had not had any formal schooling aside from a Montessori daycare program, and I doubt he was even familiar with the concept of a sentence, let alone a complete one. Needless to say, he did not understand what the question was asking for. So he would give one-word answers like “dogs.”
As time went on, I realized I was watching a farce — she would ask him another “complete sentence” question, patiently wait for his one-word answer, then mark that answer as incorrect.
A question like the one above might evaluate any number of different things:
- Does the child know what a complete sentence is?
- Can the child comprehend the question?
- Can the child give a reasonable response in the target language?
- Can the child form a grammatically correct sentence?
The problem is that questions (and batteries of questions) like these evaluate different things depending on the knowledge of the test-taker. If the test-taker knows what a complete sentence is, then whether they get each question right is about the content of the question and whether they take the trouble to form a sentence. If the test-taker doesn’t know what a complete sentence is, then the questions bundle together, measuring the same thing over and over and over again, yielding no new information.
With my son, they got one bit of information from those 20 questions: that he didn’t know what a complete sentence was. And of course, because the overall test score was the sum of “correct” answers, all of those one-word answers resulted in zero additional points.
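The “one bit” claim can be made precise with a back-of-the-envelope Shannon entropy calculation. This is a minimal sketch with made-up numbers: I’m assuming, purely for illustration, a 50/50 prior on whether a child knows what a complete sentence is.

```python
import math

def entropy(p):
    """Shannon entropy of a yes/no (Bernoulli) variable, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Illustrative prior: a coin flip on whether the child knows the concept.
p_knows = 0.5

# If all 20 answers hinge on that single unknown, they are perfectly
# correlated: the whole battery yields at most one variable's entropy.
bits_from_battery = entropy(p_knows)

# If each question instead probed something independent (same odds per
# question), the battery could yield up to 20 times as much.
bits_if_independent = 20 * entropy(p_knows)

print(bits_from_battery)    # 1.0
print(bits_if_independent)  # 20.0
```

Twenty questions that all collapse onto one underlying unknown are worth exactly one question.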
Alternative approaches could yield more interesting information. One approach would be to just change how answers are evaluated. Instead of a single “correct/incorrect” evaluation for the question, the examiner could mark whether the child tried to answer in a complete sentence, whether the sentence was grammatically correct, and whether they gave a reasonable response to the question. An answer like, “My favorite animal is a triangle” would endear the child to me, but also suggest there’s some comprehension issue or perhaps willful incorrectness going on.
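That multi-dimensional rubric is easy to sketch. The dimension names below are mine, not the school’s actual rubric, and the two scored answers are the examples from this essay:

```python
from dataclasses import dataclass

@dataclass
class AnswerScore:
    """Per-answer rubric instead of a single correct/incorrect mark."""
    complete_sentence: bool   # did the child attempt a full sentence?
    grammatical: bool         # was the sentence grammatically correct?
    reasonable_content: bool  # did the answer address the question?

# A one-word answer like "dogs" still carries information here:
# the content is reasonable even though no sentence was formed.
one_word = AnswerScore(complete_sentence=False, grammatical=False,
                       reasonable_content=True)

# "My favorite animal is a triangle": a well-formed sentence
# wrapped around an off-the-wall answer.
triangle = AnswerScore(complete_sentence=True, grammatical=True,
                       reasonable_content=False)

def profile(scores):
    """Summarize a battery as a profile rather than a single total."""
    return {
        "sentences_attempted": sum(s.complete_sentence for s in scores),
        "grammatical": sum(s.grammatical for s in scores),
        "reasonable": sum(s.reasonable_content for s in scores),
    }

print(profile([one_word, triangle]))
# {'sentences_attempted': 1, 'grammatical': 1, 'reasonable': 1}
```

Under binary scoring both answers score zero and look identical; under the rubric they fail in visibly different ways.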
A more interesting approach would be to note down that he didn’t answer the first two or three questions with a complete sentence (which would establish that he didn’t know what a complete sentence was), and then teach him what a complete sentence was with a few examples. Then ask him the remaining question battery to see whether he could apply the new idea to fresh examples. As a measure of learning and language comprehension, this approach is probably more informative — the test-taker has to understand a new concept described in the target language (in this case, Chinese or English) and then apply that concept using that target language.
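The probe-teach-retest flow can be sketched as a small procedure. Everything here is hypothetical structure I’m imposing on the idea: `ask` and `teach` are stand-ins for the examiner’s actions, and “any probe answered in a complete sentence” is one possible rule for deciding the child already knows the concept.

```python
def adaptive_battery(ask, teach, questions, probe_count=3):
    """Sketch of the teach-then-test idea.

    ask(q)  -> True if the child answered q in a complete sentence.
    teach() -> demonstrate the concept with a few examples.
    Returns (knew_already, learned, post_teaching_results).
    """
    probes, rest = questions[:probe_count], questions[probe_count:]

    # Phase 1: a few probes establish whether the concept is known.
    knew_already = any(ask(q) for q in probes)

    # Phase 2: if not, teach it on the spot.
    if not knew_already:
        teach()

    # Phase 3: the remaining questions now measure whether the child
    # can apply the (possibly just-taught) concept.
    post = [ask(q) for q in rest]
    learned = (not knew_already) and any(post)
    return knew_already, learned, post
```

Under this scheme a child who walks in knowing the formalism, a child who picks it up mid-test, and a child who can’t apply it even after teaching all produce distinguishable results, instead of one score.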
The instructor who tested my son probably didn’t realize that I was evaluating the school at the same time she was evaluating my son. I was not particularly keen on sending him there after watching them make such a basic measurement mistake.
Whether a child knows the academic formalism of “answer in a complete sentence” might be a valuable thing to know. It’s probably an indication of prior formal schooling and also suggests that a child is ready to analyze sentences: what’s grammatical and what’s not; how verbs and nouns and adjectives and prepositions work with each other to create meaning. But you don’t need 20 questions to know this.