Duke of Marmalade
I attach the report on Leaving Cert standardisation. It is not bedtime reading, but it does give some insight into the algorithm (they don't call it that). Appendix G gives the technical details. It is really wonkish, I suspect deliberately so; it would be a brave soul who challenged the maths.
There follows my broad understanding of the approach, but I would welcome clarification/correction.
There are basically two inputs:
(1) The Teachers' Assessments
(2) Predictions from Junior Cycle for that school and subject, based on a regression fitting past Leaving Cert results to past Junior Cycle performance. (Note: the regression is fitted on results from all schools and subjects, and includes other predictors, but the correlation of Junior Cert results with Leaving Cert results dominates.)
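A toy sketch of what input (2) might look like, assuming a simple one-variable least squares fit of past Leaving Cert averages on past Junior Cycle averages per cell. The real model pools all schools and subjects and uses several predictors; the numbers below are invented purely for illustration:

```python
# Hypothetical per-cell averages: past Junior Cycle score vs the past
# Leaving Cert score for the same cohort. Made-up data, illustration only.
past_jc = [52.0, 61.0, 58.0, 70.0, 47.0]
past_lc = [55.0, 63.0, 60.0, 72.0, 50.0]

def ols_fit(xs, ys):
    """Ordinary least squares: return (slope, intercept) of y ~ x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

slope, intercept = ols_fit(past_jc, past_lc)

def predict_lc(jc_score):
    """Predicted Leaving Cert average for a cell with this JC average."""
    return slope * jc_score + intercept
```

The fitted line then yields a predicted Leaving Cert distribution for each cell from that school's past Junior Cycle performance in that subject.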
These provide two probability distributions for each school and subject combination (cell).
The distribution used for standardisation is a mixture of the two. The proportion of the mixture given to the Junior Cycle Prediction is decided by how many students are in that cell. The formula for deriving that mixed distribution is where it gets really wonkish, but the report gives the example that if there were only 6 in the cell no credibility would be given to the Junior Cycle Prediction (i.e. the Teachers' Assessments would be accepted without adjustment); conversely, the larger the cell, the greater the credibility given to the Junior Cycle Prediction. Unfortunately we are not told the upper limit of this credibility, but I doubt it exceeds 50%.

Having combined the distributions in this way, the students are fitted into the standardised distribution (for that cell) based on the marks given by the Teachers' Assessment. So someone in the middle of the class would be given the mid point of the standardised distribution. Note that the individual's own Junior Cycle performance has no role whatever in his/her final mark; the Junior Cycle Prediction is applied at the school level.
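The mixing step might be sketched like this. The credibility function below is an invented placeholder, not the actual Appendix G formula; it only uses the 6-student floor from the report's example and the 50% cap I am speculating about, and `n_full` is a made-up parameter:

```python
def credibility(n, n_min=6, n_full=100, cap=0.5):
    """Weight given to the Junior Cycle Prediction for a cell of n students.
    Placeholder formula: n_min comes from the report's example, cap from
    speculation above, n_full is entirely made up."""
    if n <= n_min:
        return 0.0
    return min(cap, cap * (n - n_min) / (n_full - n_min))

def blend(teacher_dist, jc_dist, n):
    """Mix the two grade distributions for one school/subject cell."""
    w = credibility(n)
    return [(1 - w) * t + w * j for t, j in zip(teacher_dist, jc_dist)]
```

With 6 students the blend returns the Teachers' Assessment distribution untouched; for a large cell it moves halfway towards the Junior Cycle Prediction, matching the behaviour described above.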
Now it is recognised that Teachers' Assessments were over-optimistic, and I assume the Junior Cycle Prediction was quite accurate in reproducing the historical averages. So the less credibility given to the Junior Cycle Prediction, the better the chances of enjoying the optimism of the Teachers' Assessment.
These are some examples:
Maths
21,552 sitting
Historical Average H1s 5.8%
Teachers' Assessment 11.6%
Standardised 8.4%
So that roughly stacks up, with an average credibility of about 55% being given to the Junior Cycle Prediction for Maths
Arabic
155 sitting
Historical Average H1s 17.1%
Teachers' Assessment 34.8%
Standardised 34.8%
Suggesting no credibility was given to the Junior Cycle Prediction
Latin
48 sitting
Historical Average H1s 19.2%
Teachers' Assessment 43.8%
Standardised 41.7%
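The implied credibility weights can be backed out from the H1 figures above, assuming (as a simplification of the cell-level mixing) that the standardised share is a straight blend of the Teachers' Assessment share and the historical average:

```python
def implied_weight(historical, teacher, standardised):
    """Solve standardised = (1 - w) * teacher + w * historical for w."""
    return (teacher - standardised) / (teacher - historical)

# H1 percentages as quoted above.
maths  = implied_weight(5.8, 11.6, 8.4)    # roughly 0.55
arabic = implied_weight(17.1, 34.8, 34.8)  # exactly 0.0
latin  = implied_weight(19.2, 43.8, 41.7)  # roughly 0.09
```

On this reading Maths got about 55% credibility, Arabic none, and Latin under 10%, consistent with cell size driving the weight.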
Overall, I think any ambulance chasers will find it difficult to pick holes in this, though there may be a role for maths expert witnesses.