Three academics/professors writing on the importance of academic research. Why am I not surprised? Oh, and did I mention that all of them work at schools that have been hovering at the edges of the rankings for ages, without making any major breakthroughs? It could also be a case of venting frustration, considering that USC Marshall alumnus Scott Miller recently prompted an almost total overhaul of the Marshall curriculum, to take effect when classes resume in September, with support from the new dean, Yash Gupta.
Anyways, instead of the old, tried & tested method of discrediting the sources, let us look at the claim entirely on its merit.
Harry and Linda DeAngelo of USC Marshall team up with Jerold Zimmerman of Rochester's Simon School to argue that US business schools have entered a misguided competition for rankings: they are diverting resources from long-term knowledge creation, which earned them their global standing in the first place, into quick-fix solutions like continuous syllabus changes designed to look good in the rankings. Research, undergraduate, and PhD programs suffer as a consequence, and if this continues, US business schools may find it hard to survive.
The authors assert that fifty years ago business education was irrelevant to most students, employers, and society, and that the subsequent promotion of research improved it. From 3,200 master's degrees in business awarded in 1955-56, the figure reached 135,000 (students enrolled in MBA programs) in 2001-02. Hmm...but hey, the MBA is a two-year program, right? So I would infer that about 67,500 degrees were awarded in 2001-02. Their argument still stands, but their intentions and integrity are certainly questionable now.
Coming back to the 3,200-to-67,500 jump, let's look at the circumstances of the time. The late 50s were a time when business schools were just getting started. Business had never been recognized as an educational discipline, and the companies of the day, driven so far by manufacturing (and thus engineering), were only beginning to realize that managers need a well-rounded understanding of business beyond the confines of technical expertise.
Business education started as late as 1881, when the first school for business management in the US, the Wharton School, was established in Philadelphia. Dartmouth and the University of Wisconsin added business-administration departments in 1891, as did Harvard in 1908. Queen's University in Kingston, ON, started a School of Commerce in 1919; New York University began the practice of training business teachers in 1926. So how many graduates do you expect of this nascent phenomenon in 1955?
Like every product, MBAs too have a product life cycle, and the companies recruiting MBAs were few and far between at the time - not because the focus was on "soft skills," as the authors suggest, but simply because those companies were early adopters and the product hadn't yet begun to cross the chasm to an early majority.
Cut to 2001. More companies are recruiting MBAs than ever - not because of the "hard skills" being taught, but because the product has reached the maturity stage and has a late-majority following. The Ford Foundation's critical reports of the early 1960s expressed concerns about wide inconsistency in the quality of teaching in MBA programs, not about a "lack of research" or a "focus on soft skills".
When I was at a school in India, there was a panel chaired by a university dean in which a captain of industry commented on how useless "knowledge for the sake of knowledge" is, and said that education should be more practice-focused. The dean, of course, took issue with that position and recounted the ways in which "knowledge for the sake of knowledge" is helpful and, in fact, crucial, putting the experienced, established, and celebrated captain of industry on the defensive. I thought both of them completely missed the boat on context.
Fundamental research vs. applied research is a long-standing dilemma that scientific laboratories have had to grapple with. To me, it is quite clear: both are important, and there should be separate organizations devoted to each. While it is for the Edward Lorenzes of the world to invent chaos theory, it takes unnamed thousands to use it for fractal graphics in computer games (or to predict earthquakes). Medical schools teach how to practise medicine, not how to do research on brain-mapping.
Yes, long-term knowledge creation is important. But that is not the purpose of a business school. Business schools have two markets and two products: for the students, they have to deliver jobs; for the companies, they have to deliver suitable talent to manage them.
Why is creating new knowledge not the purpose of a business school, you ask? Well, the simple answer is that business or management, though regarded as an academic discipline, is not really a specialized domain. Managers do not have to be specialists in some magical craft called management. Instead, they have to have an understanding of various disciplines - economics, decision modeling/ statistics, finance, accounting, psychology/ sociology/ motivation/ organization behavior/ marketing/ negotiations/ communication/ leadership - that help them manage their businesses better. That's what recruiters want, and that's what the schools should teach. Knowledge creation should, thus, be done at schools of statistics or economics, and its business application taught at business schools.
Knowledge creation made business schools better, the authors claim. How many business school professors have won Nobels? Heck, how many business professors have created new knowledge that is taught even at business schools? What is the original contribution to the business world of Philip Kotler, the most celebrated author of marketing textbooks - aside from some obvious-even-to-the-dummies banter? I remember the BCG and McKinsey matrices, and the McKinsey 7S framework, but no Kotler model. Nash, Marshall, Drucker, Peters, Hammer, Maslow, Taylor, Milgram, Zimbardo - none of them was a b-school professor. Yes, Porter and Prahalad are b-school professors, but frameworks like the 5 forces can hardly be called "new knowledge"...they are just frameworks to structure thinking. Again, the b-school professors are not at fault. They are not creating new knowledge, and creating new knowledge is not their job.
On page 5, the authors say that ranking changes are more statistical noise than news. Here they are actually talking about what is wrong with US b-school rankings, not what is wrong with US b-schools; I may not agree even with their assessment of the rankings, but I'll reserve those comments for another time and place. For now, all I want to say is that "statistical noise" sounds like an authoritative academic note, yet the authors have not provided a shred of evidence anywhere in the paper to ground their assertion that ranking changes are noise, not news.
On the same page, they use the example of Stanford's fluctuation to say rankings are flawed: "For example, during the dot-com boom, the graduating Stanford MBA class apparently alienated traditional corporate recruiters with their arrogance, causing a drop in Stanford's rankings. This ratings decline obviously was not indicative of a true decline in the educational quality of the Stanford MBA program, let alone a real erosion in Stanford's position as one of the best business schools overall." Seems to me that these academics have lopsided notions of quality. As I said before, the product of a business school is not the exact curriculum or the "education"; it is the MBAs it produces. If the MBAs produced by a school are less valuable to the customers for whatever reason (including arrogance/low EQ), then obviously the school becomes less valuable. What could be more straightforward than that?
I also find laughable their assertion that, to capture meaningful changes in program quality, there should be a statistically detectable correlation among ranking changes. The rankings of different publications, expectedly, look at different aspects and assign different weights to the attributes. I'll oversimplify for the sake of illustration.
Let's say there are 3 schools - A, B and C.
Three publications - X, Y and Z.
X uses 3 criteria for publishing the rankings - H,I,J - and assigns them the weights of 50%, 40% and 10% respectively.
Y uses 4 criteria - I,J,K, L - with weights 20%, 20%, 30%, 30% respectively.
Z uses 3 criteria - H, J, L - with weights 10%, 30%, 60% respectively.
In year 1, the scores (out of 100 points) are:

School | H | I | J | K | L | Total(X) | Total(Y) | Total(Z)
A | 80 | 50 | 70 | 70 | 40 | 67 | 57 | 53
B | 70 | 60 | 50 | 80 | 60 | 64 | 64 | 58
C | 40 | 50 | 80 | 60 | 90 | 48 | 71 | 82
A is ranked 1st by X but only 3rd by Y and Z.
B is ranked 2nd by all of X, Y, and Z.
C is ranked 3rd by X, and 1st by Y and Z.
Now, let's say B "improves" its performance on L to 90 points by diverting resources from H, so that its performance on H falls to 30. The year-2 scores become:

School | H | I | J | K | L | Total(X) | Total(Y) | Total(Z)
A | 80 | 50 | 70 | 70 | 40 | 67 | 57 | 53
B | 30 | 60 | 50 | 80 | 90 | 44 | 73 | 72
C | 40 | 50 | 80 | 60 | 90 | 48 | 71 | 82

B is now ranked 3rd by X, 1st by Y, and 2nd by Z. A single real, underlying change moved B in opposite directions in different rankings, so there is no reason to expect ranking changes to be correlated across publications even when they reflect genuine changes in quality.
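For anyone who wants to check the arithmetic, here is a minimal sketch of the toy example, using the hypothetical schools, publications, criteria, and weights from the illustration above (they are made up for the argument, not real data):

```python
# Toy illustration: three schools scored on five criteria, ranked by
# three publications that use different criteria and weights.

scores_year1 = {
    "A": {"H": 80, "I": 50, "J": 70, "K": 70, "L": 40},
    "B": {"H": 70, "I": 60, "J": 50, "K": 80, "L": 60},
    "C": {"H": 40, "I": 50, "J": 80, "K": 60, "L": 90},
}

# Each publication's criteria and weights (weights sum to 1.0).
weights = {
    "X": {"H": 0.5, "I": 0.4, "J": 0.1},
    "Y": {"I": 0.2, "J": 0.2, "K": 0.3, "L": 0.3},
    "Z": {"H": 0.1, "J": 0.3, "L": 0.6},
}

def rankings(scores):
    """Return {publication: [schools, best first]} by weighted total."""
    result = {}
    for pub, w in weights.items():
        totals = {s: sum(w[c] * crit[c] for c in w)
                  for s, crit in scores.items()}
        result[pub] = sorted(totals, key=totals.get, reverse=True)
    return result

# Year 2: B diverts resources from H to L (H: 70 -> 30, L: 60 -> 90).
scores_year2 = {s: dict(crit) for s, crit in scores_year1.items()}
scores_year2["B"].update(H=30, L=90)

print(rankings(scores_year1))  # B is 2nd in all of X, Y, and Z
print(rankings(scores_year2))  # B: 3rd in X, 1st in Y, 2nd in Z
```

Running it reproduces the tables' point: one resource shift at B moves it down in X and up in Y at the same time, so uncorrelated ranking changes across publications are exactly what you'd expect.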
From pages 7-12, the authors focus on the horizon problem for deans of business schools. They say that since deans' contracts are typically five-year ones, deans focus on short-term benefits and can indulge in false or creative reporting. This sounds convincing at first, especially in the shadow of Enron et al. However, closer examination dissipates the aspersions. Though they work on five-year contracts, few deans at the top business schools serve for less than 15 years or so...essentially they stay until retirement. Besides, even five years is a long enough period in fast-paced American business to expose the "true state of affairs" and affect demand.
The authors seem quite stressed (page 8) by:
1. constant upheaval, "whose turmoil impedes student learning and faculty knowledge creation"
2. "distortions" in the MBA curriculum
3. pressures to trade off instructional quality in undergraduate and other graduate programs (go on, say it...say PhD) for the full-time MBA program
4. adverse changes in the student composition of MBA and other programs
5. temptations to manipulate the data schools supply to the media
6. a reduced emphasis on PhD education
With regard to points 1 and 2, I'd just ask these "ivory tower academics" to go out and have a look at how Lever or P&G continuously research customers and improve their products to reflect those learnings.
Points 3 and 6 are bogus. Why should business schools care about undergraduate or PhD programs at all? PhD programs exist solely because of the rankings. Undergraduate programs were created in response to excess demand - flankers built to capture the overflow benefits of the flagship product, the MBA. Because the MBA is great, many undergraduates want to come study at the school. What do you expect, then? Should the school divert resources from the MBA to the undergraduates? Preposterous.
Regarding point 4, the authors say (pages 11-12), "Sometimes these concerns lead schools to systematically exclude students with high GMATs whose personal characteristics make them difficult to place. Other times they lead them to reject students with marginally lower GMATs whose other attributes would improve the collective learning environment." Supply and demand. Obviously the schools want students who are easier to place. If a batch of Coca-Cola is too sweet, Quality Assurance will throw it out as a rejected lot. Ditto for a lot with way too much fizz. Conformance quality maintenance. Supply and demand.
Point 5 - if the concern is that unethical and "creative" reporting methods (page 12) are being used in some places, which seems quite doubtful at least among the top 50 or so schools, then the solution is to force the schools to report correct numbers, a la SEC reporting, not to eliminate the rankings.
On page 10, the authors quote Pfeffer and Fong, who have noted a "disturbing" trend toward handing out PowerPoint slides to save students from taking notes: "Students now routinely expect summaries of course readings and materials. For instance, at Stanford and many other business schools, it is now customary to pass out copies of overheads at the end of each class session summarizing the main points and ideas of the class, in response to student demands for 'structure' and 'take-aways.' The problem is that when students are relieved of any sense of responsibility for their learning and much involvement in the learning process, the evidence is that they learn much less."
This was personally very funny to me, because when I went to my first MBA class, I had planned to use a speech-to-text program to take notes. I hadn't bought a microphone by then, and the professor distributed copies of the slides before the lecture, so that plan never took off. But these oldies took me back a year. I can understand their disgust at the distribution of PowerPoints, though. They took notes when they were students...that is how students are supposed to behave...how dare the dean make them distribute notes! I have hardly ever taken notes in class...I just find it distracting. My mind is a terribly single-processing machine, and I can assimilate, analyze, and think about the topic under discussion only if I don't have to worry about taking notes.
But it's really a generation gap that causes the trouble. Some time ago, teachers were strictly against taking notes - knowledge transfer was oral. Socrates worried that relying on written texts, rather than the oral tradition, would “create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves.” He also objected that a written version of a speech was no substitute for the ability to interrogate the speaker, since, when questioned, the text “always gives one unvarying answer”. So, we'll just leave this objection of the authors as neophobia.
And, of course, you wouldn't want me to comment on the authors' assertion that outside-class activities, or "ancillary activities" are bad for business school quality and impose heavy costs on MBA students in terms of time. Just reminds me of Mark Twain, who, when asked about the reason for his spectacular success, said, "I've never let my school interfere with my education."
Although I do not feel a real need for PhD programs at business schools, I feel an urge to refute a very lame assertion the authors make on pages 13-14: "Superior foreign-born PhDs are not interested in US faculty positions simply to teach - they can do that just as well, and probably more comfortably, in their home countries. They seek employment at US business schools for their research environments, which are currently superior to those of foreign business schools."
What should I say to that? Firstly, I guess one reason foreign PhDs are interested in US faculty positions is the money - I haven't come across many US b-school professors from the Scandinavian countries. Second, the "research" the authors mention is not research in genetic engineering or nanotechnology. A market research study, or the referencing of 20 books, can be done just as easily in any part of the world.
The authors devote a whole section, "Research competence does not mean teaching incompetence" (pages 17-19), to defending research. The fact is, most schools with "cutting-edge research" faculty give students little access to those faculty. I'm sure you have heard the horror stories about faculty so engrossed in research that they don't even teach classes, relegating the lowly teaching work to teaching assistants. Speaking of TAs, let me share something a professor I was TA'ing for told me: some department heads can actually become upset if your students give you very high course evaluations, because that would mean, in all probability, that you are diverting time from research to teaching. Go figure!
Finally, in case you have any illusions about business schools elsewhere being any "better", this should cure you: Gary Hamel (of "core competence" fame), a part-time visiting professor at London Business School, where he is preparing to launch the world's first university-based "Management Innovation Lab", was quoted by BusinessWeek as saying, "I'm not sure academia is the best place for management study. There's too little priority to connect minds in business schools with companies... [I chose London Business School because] it's the best business school outside of the U.S...[In the U.S., faculty members are] more absorbed in papers."
I'll drink (Pomgreat) to that.
P.S. -
1. I do believe that the rankings are somewhat arbitrary, and the huge differences in rankings of some schools from publication to publication and from year to year do nothing to boost my confidence in them. However, I also believe that the schools that manage some consistency across publications and an upwards or stable pattern across years do tend to be the real thing. Any ranking attempt of anything involving multiple attributes will involve a subjective element, and that's fair enough. Schools consistently ranked 1-5 will usually be comparable, as would those ranked 6-10, or those ranked 11-20.
2. Prof. Henry Mintzberg of McGill says in his "Managers Not MBAs: A Hard Look at the Soft Practice of Managing and Management Development" that "[MBA students'] classes are focused on analysis and technique instead of clinical experience."