Leaving Cert Standardisation

@SPC100 those are interesting speculations.
Using school historics became a political no-no after the UK debacle. We came up with a clever way round that: we would use the school historics but base them on the Junior Cert achievement of the current year, thus dealing with the criticism that this year's students were simply getting what past years' students in the same school got.
But the chosen algorithm has not worked in the case of St Kilians, for reasons which I am beginning to understand, and maybe it is a unique exception. The idea of using Junior Cert results was good, but it should have been used to determine how this year's cohort compares with the previous three years' cohorts, i.e. to validate the use of school historics as originally proposed.
 
So that is exactly how we have worked it out.
To try and explain what happened I will use plausible numbers.
Let's say Kilians were 100% of normal in Junior Cert English, Irish, Maths and, say, French, but they were 200% in German. Their composite score would then be 120% ((4 x 100 + 200)/5). So they are assumed to perform at 120% of average in all subjects at Leaving Cert. Clearly that gives an unjustified advantage in all subjects except German, but for German it gives a massively unjustified disadvantage. Now, given the high level of gearing in translating marks into grades, it is almost certain that in terms of Grades/Points overall Kilians were badly cheated.
Maybe overall it did balance out for Kilians but I very much doubt it.
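
For anyone who wants to check the arithmetic, here is a rough sketch in Python using the same plausible numbers as above (not real data):

```python
# Illustration of the composite-multiplier effect, using the plausible
# numbers above (not real data).

jc_performance = {          # school's JC performance relative to national average
    "English": 1.00,
    "Irish":   1.00,
    "Maths":   1.00,
    "French":  1.00,
    "German":  2.00,        # twice the national average
}

# A single composite multiplier is formed by averaging across subjects:
composite = sum(jc_performance.values()) / len(jc_performance)
print(f"composite multiplier = {composite:.2f}")   # (4 x 1.00 + 2.00) / 5 = 1.20

# Applied uniformly to every LC subject, the error per subject is:
for subject, strength in jc_performance.items():
    print(f"{subject}: assumed {composite:.2f}, actual {strength:.2f}, "
          f"error {composite - strength:+.2f}")
# English/Irish/Maths/French each get an unearned +0.20;
# German suffers a -0.80 shortfall.
```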
 
And as for the grind schools, they probably have a point as well.
If Junior Cert predictions are understating the historic achievement of grind schools then they are being unfairly standardised.
After all, we must assume that they are worth at least some of their fees and that they do get their students to over-achieve compared to their Junior Cert results. Indeed it may even have been poor Junior Cert results that led the parents to do something about it.
It is becoming clearer and clearer to me that using schools' historic results was the correct standardisation process. Junior Cert predictions might have had a role in deciding whether this year's cohort was inherently better or worse than the historic cohorts, and that could have been used to adjust the historic performance - but adjusted or otherwise, it is historic performance that should have been used.
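
A minimal sketch of that adjustment, assuming each cohort's JC performance could be put on a common scale (all figures invented for illustration):

```python
# Using JC results to adjust school historics rather than replace them.
# All figures are invented for illustration.

historic_lc = [72.0, 74.0, 73.0]   # school's average LC mark, last three years
historic_jc = [65.0, 66.0, 64.0]   # the same cohorts' average JC mark
current_jc = 68.0                  # this year's cohort at JC

# How does this year's cohort compare with the cohorts behind the historics?
cohort_ratio = current_jc / (sum(historic_jc) / len(historic_jc))

# Historic LC performance, adjusted for cohort strength:
baseline = sum(historic_lc) / len(historic_lc)
adjusted = baseline * cohort_ratio
print(f"cohort ratio {cohort_ratio:.3f} -> adjusted baseline {adjusted:.1f}")
```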
 
If it is true that the school a student attends affects their results, then the lack of modelling for a school effect will of course reduce accuracy.

The current model is based on averaging over all national past relationships between JC and LC achievement.
 
Duke, using your working example, wouldn't we expect Irish schools to have an unfair advantage?

Edit: Assuming their English and Maths are like everyone else's.
 
Re the 200% in German / 1.2 multiplier in your example - are you sure the model's input is subject-aware? IIRC the tech docs said that every additional subject made modelling too hard (especially as many subjects were only rarely taken).

I had the impression the input is the Junior Cert score in Irish, score in English, score in Maths, score in best other subject, and score in second-best other subject.

And that the model was built from those inputs, along with students' LC per-subject scores. This is how the model can predict a score for an LC subject for a given JC performance.

This is what I mean by the lack of distinction between a ten-A1 JC cohort and a five-A1 JC cohort leading to a reduction in dynamic range, or the lack of recognition of the really high-performing cohorts.
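
For illustration, here is how I picture those inputs (this is my reading of the tech docs, not verified):

```python
# My impression of the model's JC inputs: Irish, English, Maths, plus the
# best two *other* scores, with no record of which subjects they came from.

def jc_features(results):
    results = dict(results)   # don't mutate the caller's dict
    core = [results.pop("Irish"), results.pop("English"), results.pop("Maths")]
    others = sorted(results.values(), reverse=True)[:2]   # best two, subject-blind
    return core + others

five_a1s = {"Irish": 100, "English": 100, "Maths": 100,
            "German": 100, "French": 100}
ten_a1s = dict(five_a1s, History=100, Science=100, Geography=100,
               Music=100, Art=100)

# A ten-A1 student and a five-A1 student produce identical inputs:
print(jc_features(five_a1s) == jc_features(ten_a1s))   # True
```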
 
It's not so much that the school affects the students' results; it is the students who select the schools that affect the school's results.
 
Duke, using your working example, wouldn't we expect Irish schools to have an unfair advantage?

Edit: Assuming their English and Maths are like everyone else's.
Yes, I expect an unfair advantage in all other subjects and an unfair disadvantage in Irish. Maybe the unfair advantage is outweighing the unfair disadvantage and that is limiting the complaints.
 
Yes, that's my read of it. The two clear anomalies that it throws up are:
1. Schools with a particular aptitude for one subject get that aptitude averaged across all subjects. On some very theoretical distribution that could mean that overall CAO points are relatively unaffected, but I suspect that the loss of grades in the good subject is not compensated by the diluted gain spread over the other subjects. I presume that is the case with Kilians, or they wouldn't have complained.
2. If a school is particularly placed to improve on the ability implied by the Junior Cert, then standardisation by JC is unfair to it. There are grounds for believing grind schools are in this category.
 
Wow! Now I suspect there is a mixture of two effects. Given that the Institute is a highly commercialised entity, I would suspect its Teachers' Assessments showed even more grade inflation than usual, and they deserve to be punished for this. But more relevant is that JC performance is a poor indicator of the Institute's historic performance at LC. This is not going to end happily.
 
I think you missed my point.

Let's assume that in a high-performing school most of the students will have had 2 As in Junior Cert outside of Irish, English and Maths.

Then assume that school is, e.g., Kilians, and they now had 3 JC As per student.

The school's actual multiplier is no better, as the model only used the top two and doesn't know which subject each score was in.
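
A quick sketch of that, with invented marks - the extra A beyond the top two simply never reaches the model:

```python
# Two versions of the same school's cohort: two A1s vs three A1s per
# student outside Irish/English/Maths. Only the best two "others" are used.

def best_two(other_scores):
    return sorted(other_scores, reverse=True)[:2]

two_as = best_two([100, 100])         # two A1s outside the core subjects
three_as = best_two([100, 100, 100])  # three A1s

print(two_as == three_as)   # True -> the school's multiplier is no better
```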
 
I accept your point and that's why I said "yes". I suppose I shouldn't have repeated my two central arguments which are separate from this particular point.
 
Thanks for confirming. Yes, that and the speculations comment previously threw me.
 
It's not so much that the school affects the students' results; it is the students who select the schools that affect the school's results.

I think it's likely both. Obviously the student likely has the larger effect, but it's easy to see how the school can influence outcomes (e.g. a poor teacher, more homework, exam focus).
 

The appeal process implies that class rank should have been maintained, and that the same school-assessed mark should result in the same final mark.

IIRC the school letter implied this was not the case for some students.

"Data checks will include a check to ensure that the rank order of the class group for the subject and level taken has been preserved in the standardisation process and that students placed on the same school-estimated mark in the same subject and at the same level taken by the school are conferred with the same calculated mark conferred by the department."
 
Quote from the letter: "How come some students awarded the same calculated mark can be given 2 entirely different grades where there is a deviation of 2 grade levels?"

So it seems something else unanticipated happened here. The modelling came up with very different answers for students with the same teacher-estimated mark, but according to the documentation the model should have treated them identically.
 
I can’t understand how this happened. Maybe clerical error.
BTW the national grade inflation is stated as 4.4%. But how is this calculated?
Update:
I see on the DoE website that the 4.4% is not really grade inflation at all; it is the increase in marks. Even that is a tad ambiguous: it either means that average marks went up by 4.4 percentage points, say from 60% to 64.4%, or by 4.4% in relative terms, from 60% to 62.64%.
I don't know how to calculate grade inflation as such, but I can calculate a concept of CAO points inflation. For Higher Level Maths this was 6.5%.
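
To pin down the ambiguity, here are the two readings side by side (the 60% baseline is just an example figure):

```python
baseline = 60.0   # say average marks were 60% last year

points_reading = baseline + 4.4        # 4.4 percentage points: 60% -> 64.4%
relative_reading = baseline * 1.044    # 4.4% relative increase: 60% -> 62.64%

print(f"percentage-points reading: {points_reading:.2f}%")
print(f"relative reading:          {relative_reading:.2f}%")
```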
 