Windows for Wales!

In 330 BC, Romans quaked at the name of Celt – skilled iron workers, fierce tall bearded fighters, sackers of cities. In fact, there’s a reason iron is called ‘iron’ and not ferrum in English. Isarn in Celtic became isen in Old English. cf. Isengard, and later iron. Celts gave northern Europe the skill of ironworking. The etymology tells the tale of technological dissemination. Confer all the English words (many of which are Greek portmanteaux) floating around in Spanish and Chinese today. They show off the originators and disseminators of invention. What is a greater signal of inventive power than the penetration of the phrase “blue jeans”?

The Romans hated Celts and Gauls and thought their life and ways depraved. They took on no words from them. On the other hand, vae victis, the Gauls took up Latin with great aplomb, and Gaulish writing died out shortly after Caesar’s arrival. They were “converted”. One Celtic people, the Welsh, learned all sorts of new words from Rome though never sold out their language, and you can see from the list of acquired words what types of things the Welsh still needed to learn from the imperium:

mur – murus – wall
ffenestr – fenestra – window
gwydr – vitreus – glass
cegin – culina – kitchen
cyllell – culter – knife
ffwrn – furnus – oven
sebon – sapo – soap
ysbwng – spongia – sponge

Although the Welsh were Celts, they were not as sophisticated as the Gauls. Roman interest in the Britons and Welsh had retreated by 410, leaving only windows and sponges and a smattering of Latin words behind… at least until the trade routes dried up a bit and with them the good sponges for feminine hygiene. One other legacy of that Roman era was a kidnapped Romano-Briton who spoke poor Latin: we call him Patrick. The Celts were not a monolith, even if they liked the monolithic style. It was hard going in Britain after 400. But apparently the Welsh weren’t willing to throw away national character in the name of soap and sponges. Richard Rich, remember, was willing to throw away his soul for Wales. So he at least was attracted to them!

No wonder St. Augustine of Canterbury and his monks were so welcome when he arrived in Kent in 597. Did he bring sponges?

Sources:

Empires of the Word by Nicholas Ostler
Mary Beard, various
A Man For All Seasons by Robert Bolt

Traditional Report Cards Serve a Purpose

All the rage in educational assessment is mastery-based grading. Strongly informed by the ethos of feedback and self-assessment, this method tries to be far more descriptive and qualitative than numeric in its assessments. I think this is a good thing. Throwing a number on something doesn’t help anyone learn. Learning is a constructive feedback process. Without timely explanation of what was done and where or how it can be done differently, learning happens slowly at best.

However, I’m not convinced that removing all scalar modes of assessment in schools is the best thing, nor am I convinced that being more explicit all the time about subskills is super valuable. Although many people are enamored by the idea of generating more pro-learning, self-directed, and holistic assessment methods, I think it is important to understand the utility of the traditional method so that replacements still meet those use cases.

This is not a pros and cons list, but a description of what the traditional A-F or percent scale or percentile method does.

  1) It compresses effort (homework completion) and skill (performance) into a single scalar.
  2) It allows comparison of this number across time and across students, which has administrative and institutional uses. (Did they do poorly in 6th grade math? Helpful for the 9th grade teacher to know in advance if quick and dirty information is available. Does this student generally get As or Cs? That’s a small bit of information and can mean many things, but it does start ballparking the proper description of the student, even if it fails to identify that much about them.)
  3) It provides (lossy) feedback to the student on the quality of the student’s effort and skill and thus loosely serves as a reward/punishment mechanism.
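The compression in point 1 can be sketched in a few lines. The effort weighting and the letter cutoffs below are my own assumptions for illustration, not any school’s actual formula:

```python
# Hypothetical sketch of "compression": effort and skill collapse into
# one scalar, then into a letter. Weights and cutoffs are assumptions.

def letter_grade(homework_pct: float, test_pct: float,
                 effort_weight: float = 0.3) -> str:
    """Compress effort (homework) and skill (tests) into one letter."""
    score = effort_weight * homework_pct + (1 - effort_weight) * test_pct
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return letter
    return "F"

print(letter_grade(95, 72))  # diligent but struggling → "C"
print(letter_grade(40, 95))  # skilled but non-compliant → "C"
```

Note what the example shows: a diligent-but-struggling student and a skilled-but-non-compliant one can land on the very same letter. That is the lossiness, and also the compression.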

Much of the utility is in how quick and dirty the traditional method of assessment is. You may notice that students often want the quick and dirty feedback too. Deciding not to give it to them might encourage learning, but I don’t know. Oftentimes, the imperfect motives of students have to be bootstrapped into the ideal motives of learning and accomplishment, and allowing the existence of the imperfect might help with that?

In any case, I would like to see a report card system (in high school) that keeps the highly imperfect scalar, but allows one to expand several layers into the consistency, quality, and portfolio of the student in each area of assessment.
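A report card like that could be as simple as a nested structure: the scalar kept on top, with detail available on demand. The field names below are illustrative assumptions, not a proposed standard:

```python
# Sketch of an expandable report card entry: the lossy scalar on top,
# unfolding into consistency, quality, and portfolio layers.
# All field names and values are hypothetical.

report_card_entry = {
    "subject": "Pre-algebra",
    "grade": "B",                      # the scalar, kept on top
    "layers": {
        "consistency": {"homework_completion": 0.92, "attendance": 0.97},
        "quality": {"test_average": 81, "revision_cycles": 2},
        "portfolio": ["unit-3 project", "error-log notebook"],
    },
}

# A parent or teacher reads top-down: scalar first, detail on demand.
print(report_card_entry["grade"])
print(report_card_entry["layers"]["quality"]["test_average"])
```

The design point is that nothing forces the reader past the scalar; the layers exist for whoever wants to drill in.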

Kelly Smith has the following objections to traditional report cards:

Report cards have been a simple, easily digestible way for student, parent, and educator to see how a student is doing in school. That’s the good part.

Things I don’t like about the report card:
– educator as evaluator undermines connection [with students]
– grades are often subjective and loosely correlated to actual learning
– the letter grade in specific subjects is an incomplete look at what a student is becoming
– the finality of a semester grade undermines growth mindset

These are fine objections! Report cards do not measure learning; they hardly help with it either! But that doesn’t mean they are broken.

I don’t wish to imply that report cards measure learning, i.e. growth! I don’t think they do. Although with enough statistical regression one can estimate the learning differential of a large enough sample. I think if you want to measure learning, you want pre-tests and post-tests of various sorts that measure knowledge, implementation, extension, and transfer of concepts or skills. (That differential would be a measure of learning, and between that and a Learning Space chart you could do some “neat”, albeit trivial, things showing how learning opens doors to new fields of study and practice.)
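One common way to turn that pre-/post-test differential into a number is a normalized gain: the gain expressed as a fraction of the headroom available before instruction. The scores and the normalization scheme here are assumptions for illustration:

```python
# Minimal sketch of a pre-/post-test differential as normalized gain.
# Inputs are hypothetical scores on a 0-100 scale.

def learning_gain(pre: float, post: float, max_score: float = 100) -> float:
    """Fraction of available headroom actually gained."""
    headroom = max_score - pre
    return (post - pre) / headroom if headroom else 0.0

print(learning_gain(40, 70))  # gained half the available headroom → 0.5
```

A raw difference of 30 points means something different for a student who started at 40 than one who started at 70; dividing by headroom is one crude way to make the differentials comparable.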

I like mastery-based grading systems. I also like to have one level more of aggregation than is advisable, because I believe in the utility of compression. I think many people who love innovation in education are attracted to information density and qualitative precision as a way to avoid the evils of reductionism. But I think a little reductionism is okay and positively useful, and students should be educated in the art of thinking numerically about qualities!

If I am being a bit trollish, so be it. I agree with the reformers that identification and legible assessment of subskills are useful to teachers, to learning, and to those who care about having a more accurate picture of where the student is at.

However, here is my concern. The number of skills that make up any learning endeavor is very, very large and somewhat fractal. The skills we care about at any given stage are the ones that are not yet proficient. And so it is very difficult to pinpoint how learning can be usefully reported when the amount of information eligible for inclusion is so vast. In Pre-algebra I can list 30 novel sub-skills we learn off the top of my head. Can you imagine the example report card above with 30 lines for math? The question is: is the juice of learning metadata worth the squeeze on teachers, admin, and technology to generate and assess interminable lists of skills?

Currently, I think the answer is ‘no.’ But I look forward to being shown wrong, and I expect to be wrong one day. In the meantime, grades are lossy signals, have useful admin functions, and don’t exactly measure learning. Don’t take them too seriously, but do take them just seriously enough.