Wednesday, September 19, 2012

Strange Maths!


Internal evaluation = 30 = 5 + 15 + 10 is a simple enough equation. Most of us who have done our preliminary 3R’s should be able to understand it. According to the guidelines, the initial 5 marks are for attendance in a course over the semester. 15 marks are allocated for two class tests (not conducted by the university, but by the constituent colleges). The third component of 10 is for class performance, including how the student performed in assignments given during the semester.

With an objective evaluator, a class of 60 students can be expected to score anywhere from 0 to 30. As theory tells us, the marks will spread out and follow a “normal” distribution: a bell-shaped curve with a peak at the average, trailing off on both ends. Fewer and fewer students would obtain high marks tending towards 30, and the number of students obtaining lower-than-average marks would similarly trail off. In fact, any evaluation of any group of individuals should follow this pattern. My engineering training, some 40-odd years back, and my passage through professional life reinforced these ideas. This is one issue that came up often at yearly appraisal time. I invariably found the concept to be applicable, in general.
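For what it is worth, here is a small Python sketch, purely my own illustration, of what such a spread over the 30 internal marks might look like for a class of 60. The mean of 15 and spread of 5 are assumptions chosen for the illustration, not figures from anywhere in particular.

```python
import random
import statistics

# Illustrative sketch only: what an objective evaluation of 60 students,
# marked out of 30, might look like. The mean of 15 and spread of 5 are
# assumptions chosen for illustration.
random.seed(1)
marks = [max(0, min(30, round(random.gauss(15, 5)))) for _ in range(60)]

print("average mark:", round(statistics.mean(marks), 1))

# Crude text histogram: counts peak near the average and thin out
# towards 0 and 30, the bell shape described above.
for low in range(0, 30, 6):
    high = low + 6
    count = sum(low <= m < high or (m == 30 and high == 30) for m in marks)
    print(f"{low:2d}-{high:2d} : {'*' * count}")
```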

However, 40-odd years is a lot of time. Things change. I know the meaning of something does depend on its context, but I never expected a concept like this to depend on context so much. What was shocking was to discover that it all came down to economics!

Over the intervening period, engineering colleges were allowed to be privately run, “for profit”. When this sector opened up, a lot of these colleges came up; according to a recent count, the total number across the country is close to 4000. They were doing quite well until a few years back. However, over the last three years these colleges have been facing a crunch in admissions. One full batch contributes to a healthy cash flow for the next three years. Batches of 60 students running through the establishment easily provide the butter and jam on the daily bread.

Now that there is heavy competition in finding students, two things really matter. The first metric is the pass percentage of students through the four-year stint. The second is how many of the students coming out get absorbed by industry, which is directly linked to how well they score in their overall CGPA/GPA or whatever. So it is absolutely necessary that students move from year to year and obtain good marks too. The college administration needs to facilitate this process to keep its reputation high and ensure full batches at admission time. That skews the meaning and implementation of the equation we started with.

Besides the 30 in internal evaluations, you need to score as much as possible out of the 70 marks allocated for the semester final examination conducted by the university. These 30 marks assume significance in helping pass rates, as well as in getting high grades. This did not all sink in in one day, but only after I had spent some five semesters in the system. Having said “enough is enough” to my working days, I had decided to take it easy. A friend offered an opportunity to work at one of these new-generation colleges. It had started in 2002, and by 2009 (when I came aboard) it had seen only those smooth times. Hard times, with a scarcity of students, started from the admission season immediately after I came in.

This was the peak time, when a semester ends and the internal marks are finalized and sent over to the university for compilation of results. A few days into the job, I received a call from the principal’s office: “You are needed to attend the meeting on internal evaluations in the principal’s office at 3 PM.” I expected this meeting was convened to moderate individual mark lists, to avoid any significant variations across evaluators. One of my colleagues, trying to be helpful to this newbie, confided, “The first agenda would be to decide on the minimum marks to be given.”

The university requires that students attend a minimum of 75% of classes in each course to qualify for the finals. Even in a residential college like this one, very few students meet the criterion. Normally, if you were to distribute 5 marks for attendance from 0 to 100 percent, you would set aside 1 mark for every 20 percent of attendance. Not so; one of my colleagues had already updated me on the philosophy of it. He told me, “No one can really be disqualified just for attendance! These colleges charge a lot of fees, and the paying customers will be unhappy if they are disqualified and have to pay for that semester again.” Thus, as most students need to qualify for the finals, everybody must be given a 4 or a 5.
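To make the arithmetic concrete, here is a small sketch of the proportional scheme described above next to the flat “4 or 5” practice. The function names are my own, and the 75 percent cut-off used for choosing between 4 and 5 is a guess on my part, not something anyone spelled out.

```python
def attendance_marks_proportional(attendance_percent: float) -> int:
    """Strictly proportional: 1 mark for every 20 percent of attendance, capped at 5."""
    return min(5, int(attendance_percent // 20))

def attendance_marks_in_practice(attendance_percent: float) -> int:
    """What was effectively settled on: everybody gets a 4 or a 5.
    The 75 percent cut-off here is my own guess, used only for illustration."""
    return 5 if attendance_percent >= 75 else 4

for pct in (30, 55, 75, 95):
    print(pct, attendance_marks_proportional(pct), attendance_marks_in_practice(pct))
```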

There is always a keeping-up-with-the-Joneses angle. All other similar colleges do it, and students know exactly what the other colleges are doing. When results are declared, there are agitations by students whose expectations are not met in the final results for a semester. We teachers were given examples, by name, of colleges that sent out a 28 or 29 for all students. That can look quite awkward, and the purpose of this meeting was to give the marks list a semblance of realness. So the principal opined: “About 10% of students should get the 80% to 90% band (defined as grade E), about 5% the 90% to 100% band (grade O), and similarly about 5% the 40% to 50% band (grade A).”
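Translated into headcounts, and this is my own back-of-the-envelope working assuming a full batch of 60 and the grade labels as quoted, the directive looks roughly like this:

```python
# Back-of-the-envelope working of the directive, assuming the percentage
# bands apply to a batch of 60 and the grade labels are as quoted above.
batch_size = 60
bands = {
    "90 to 100% (grade O)": 0.05,
    "80 to 90%  (grade E)": 0.10,
    "40 to 50%  (grade A)": 0.05,
}
placed = 0
for label, share in bands.items():
    count = round(share * batch_size)
    placed += count
    print(f"{label}: {count} students")
print("students left to be spread over the remaining bands:", batch_size - placed)
```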

That leaves the class tests and the class performance issues. Two class tests are taken, one towards the beginning of a semester and one towards the close of it, each for 15 marks, while the equation we started with allocates only 15 to this component. You need to decide how to take the two tests into account. “Let us take the better of the two” was the directive. The average of the two, I thought, would be a fairer evaluation. However, the reality on the ground dictated otherwise. The marks in the first class test would be clustered around 0 to 5, with a smattering of students getting between 5 and 15. The prevalent excuse was that students would not have warmed up enough by the first class test. This happened despite liberal marking. The second class test, usually, was no different!
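The difference between the two rules is easy to see on a few made-up score pairs; the numbers below are purely my own invention for illustration.

```python
# Illustrative comparison of "better of the two" versus "average of the two"
# class-test scores, each test being out of 15. The score pairs are invented.
pairs = [(3, 12), (5, 5), (0, 9), (14, 15)]

for first, second in pairs:
    better = max(first, second)
    average = (first + second) / 2
    print(f"tests {first:2d} and {second:2d} -> better: {better:2d}, average: {average:4.1f}")
```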

Assignments, when actually given, came back as if each student had made Xerox copies of some original. At times, you could detect two or more originals. Thus, in truth, these 10 marks were just the elastic part, to be stretched to fit the directives. Some amount of stretch was available in the class test category too. “What happens to the few good students who are there?” I asked one of the veterans who had been in the system for five years. “Isn’t it unfair to those students who actually attend classes, get excellent marks in the tests and score well in the assignments?” They may score 24 to 30 on their own merit, yet they get no benefit from the bonus marks. In real life, and I have seen this first hand, the worst disservice you can do to a good person is not to recognize that merit.

The main argument for internal evaluations is that, given an objective evaluator, students are evaluated best by the teachers who interact with them directly. However, when the system gets skewed in the way I found, it loses all meaning. It does not help produce good engineers. A recent well-known survey found that less than 25% of engineering graduates are even employable. I am not surprised! I am reminded of the consolidation phase of an industry. When a sunrise industry begins, a lot of players jump in. Soon, you arrive at an oversupply situation. A consolidation phase comes thereafter, when only the quality producers and service providers survive. Among other things, market dynamics will help improve the situation, I guess!