Thanks for this bookmark, Su-Tuan. As a guy who was born and raised in Calgary, I love cowboy analogies.
These two metaphors could also aptly describe the two types of MOOCs: the so-called cMOOCs, in which connectivist designs help nourish and grow knowledge, and the xMOOCs, where predetermined and predigested content is fed to and then tested on LARGE numbers of learners.
But to be fair, the cattle herd analogy can also be applied to many behaviourist pedagogies and canned courses delivered in print and other forms by open universities around the world.
It is great to see an extended metaphor in this landscape of tweets we live in; thanks very much for this. I am struck by the fact that many animals may not actually survive the drive, and that is thought of as OK, whereas society just deals with the failures of the education system in the medical and prison systems.
So I like the cattle drive, but I'm not getting, according to this metaphor, what makes the watering hole fundamentally different from a simple community, e.g. a technology user group?
I think 'education' in some way has to have evaluation. Intuitively it seems true, and evaluation has always been part of every kind of education and training in traditional cultures. I am interested in whether the evaluation is fair, and even fun.
Su-Tuan, thanks for posting this provocative paper. It pushes my buttons as well! I agree with Jon's post and would like to add that it is exactly this type of thinking (which exists within school administrations and boards) that constrains educational growth. Educational systems have to answer to taxpayers, and assessment is the traditional criterion used. We have to learn how to assess differently, yet make the results palatable to the 'powers that be'. Standardized testing is the easy way out.
Absolutely! And indeed, thanks, Su-Tuan, for highlighting this one.
Standardized testing is the lazy way out but, like most lazy solutions, turns out to be much more effort in the end.
We do need to get away from being simultaneously producers and arbiters of education though, and one good thing to be said about such otherwise harmful practices is that they can make this easier to accomplish. Education is one of those rare trades where, when *we* do our job badly we get to tell the person who is paying us that it is their fault and, astonishingly, they tend to accept this on the whole. To add injury to insult, while they may often be the ones paying for it, they are not even really the customer (the primary customer is society as a whole). It is, however, still far better to use portfolios to achieve this separation, as long as there are good and accountable criteria for measuring competence, such as may be found in Athabasca University's PLAR process.
Thanks for the feedback, Dr. Dron and Susan!
Shavelson is one of the key scholars whose publications I have been tracking in order to keep pace with current studies on formative assessment. My immediate response when I first read his comments on portfolio assessment was: “This is interesting!” I don’t often hear voices that do not support the portfolio as a tool for student assessment. Thus I bookmarked it here on the Landing for further reference and to share with Landing-ers.
I have developed more than one e-portfolio for different courses and programs I took at academic institutions so that my instructors could assess my learning results. I learned a lot from developing e-portfolios. I agree with Dr. Dron that portfolio creation “has innate value as a learning experience and contributes greatly to the learning process” and that restricted causal perspectives might be too narrow for complex learning such as PhD research. However, I also recognize the value of the empirical evidence produced by accuracy-based approaches to assessment, because educational assessment aims to draw inferences about what students know, can do, or have accomplished more broadly, based on observations of what they know and do (Mislevy & Riconscente, 2005). Inferences are hypotheses, and the validation of inferences is hypothesis testing that embraces experimental, statistical, and philosophical means (Messick, 1989).
Above all, I consider the validity of educational assessment a continuum, with the portfolio approach at one end and the criterion-based approach at the other. It is a matter of degree rather than an absolute value. Assessment is an integrated evaluative judgment: it refers to the degree to which empirical evidence and theoretical rationale support the adequacy and appropriateness of interpretations (Messick, 1989). Students’ performances need to be assessed in multiple ways (informal and formal) by multiple participants (instructor, peers, and self) (Perkins & Blythe, 1994). Therefore, using any single form of assessment to build a claim about student performance could be insufficient and is itself an easy way out.
Assessment has been “a frustrating professional problem for the people involved” (Patton, 1986, p. 11). I think neither portfolio nor criterion-based standardized assessment is an easy job. Differences in teachers’ psychological beliefs (trait, behavioral, information-processing, sociocultural …) and in the purpose of assessment (to publish in league tables, to provide certificates, to support learning …) will lead to different practices of assessment (Black, Harrison, Lee, Marshall, & Wiliam, 2003).
Thanks for sharing your time and attention with me! Does my lengthy post make any sense to you?
Su-Tuan Lulee
References
Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2003). Assessment for Learning: Putting it into Practice. New York: Open University Press.
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational Measurement (3rd ed., pp. 13-103). New York, NY: Macmillan.
Mislevy, R. J., & Riconscente, M. M. (2005). Evidence-Centered Assessment Design: Layers, Structures, and Terminology (PADI Technical Report No. 9). CA: SRI International. Retrieved from http://padi.sri.com/downloads/TR9_ECD.pdf
Patton, M. Q. (1986). Utilization-Focused Evaluation (2nd ed.). Sage Publications.
Perkins, D., & Blythe, T. (1994). Putting understanding up front. Educational Leadership, 51(5), 4.
Thanks, Terry, for the great tools for coding content!
Does this mean we don't need commercial solutions such as ATLAS.ti for not-so-complex content?
I am not sure, but it seems that CAT can only code data by paragraph (the paragraph being the unit of analysis); a rough sketch of what that implies is below.
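For anyone unfamiliar with the idea, here is a minimal sketch in Python of what coding by paragraph as the unit of analysis means in practice. The data and code labels are made up for illustration, and this does not use CAT's or ATLAS.ti's actual APIs; the point is only that each code attaches to a whole paragraph, never to a smaller span such as a sentence.

```python
from collections import defaultdict

# Hypothetical transcript; paragraphs are separated by blank lines.
transcript = """First paragraph of an interview, mostly background.

Second paragraph, where the respondent discusses assessment.

Third paragraph, where the respondent describes a portfolio."""

# Split on blank lines: each paragraph becomes one codable unit.
units = [p.strip() for p in transcript.split("\n\n") if p.strip()]

# Codes can only be assigned per unit (paragraph index), which is the
# limitation noted above: no sub-paragraph coding is possible.
codes = defaultdict(set)
codes[1].add("assessment")
codes[2].add("portfolio")

for i, unit in enumerate(units):
    print(i, sorted(codes[i]), "-", unit[:45])
```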
Su-Tuan