I’ve been writing a series of columns on key directions our school is pursuing to further integrate technology into the educational program and the operations of the school. In this column, my second in the series, I had intended to focus on what educational research tells us about the impact of technology on student learning. More specifically, I intended to share the results of research on what effect the advent of computers in schools and classrooms has had on student outcomes, on various measures including, but not limited to, standardized test scores and other established measures of improved student performance.
Problem is, there isn’t very much to share.
As counterintuitive as it sounds, for all the interest and the extraordinary investment in bringing new technologies into schools, there is precious little evidence that children learn more, or better, or score higher, or get into better schools, or go on to more productive careers, or earn more money, or are happier, or are better people – any of the criteria usually used to measure educational effectiveness – because of the use of technology. And if you think I’m making that up, let me quickly refer to some more authoritative sources:
In a September 3 article in the New York Times, reporter Matt Richtel describes a school district that has invested heavily in computers in classrooms and throughout the school system, only to see its own test scores stagnate while scores statewide rose. Richtel used this district as an example of what seems to be the case in many, many other places throughout the country:
“But to many education experts, something is not adding up – here and across the country. In a nutshell: schools are spending billions on technology, even as they cut budgets and lay off teachers, with little proof that this approach is improving basic learning.
This conundrum calls into question one of the most significant contemporary educational movements. Advocates for giving schools a major technological upgrade – which include powerful educators, Silicon Valley titans and White House appointees – say digital devices let students learn at their own pace, teach skills needed in a modern economy and hold the attention of a generation weaned on gadgets.
Some backers of this idea say standardized tests, the most widely used measure of student performance, don’t capture the breadth of skills that computers can help develop. But they also concede that for now there is no better way to gauge the educational value of expensive technology investments.
“The data is pretty weak. It’s very difficult when we’re pressed to come up with convincing data,” said Tom Vander Ark, the former executive director for education at the Bill and Melinda Gates Foundation and an investor in educational technology companies.
And yet, in virtually the same breath, he said change of a historic magnitude is inevitably coming to classrooms this decade: “It’s one of the three or four biggest things happening in the world today.”
Critics counter that, absent clear proof, schools are being motivated by a blind faith in technology and an overemphasis on digital skills – like using PowerPoint and multimedia tools – at the expense of math, reading and writing fundamentals. They say the technology advocates have it backward when they press to upgrade first and ask questions later.”
And then, in October, in the next installment in the Times series “Grading the Digital Classroom,” there is a brief description of software (Cognitive Tutor) that is promoted as a “revolutionary math curricula with revolutionary results.” Yet:
“The federal review of Carnegie Learning’s flagship software, Cognitive Tutor, said the program had “no discernible effects” on the standardized test scores of high school students. A separate 2009 federal look at 10 major software products for teaching algebra as well as elementary and middle school math and reading found that nine of them, including Cognitive Tutor, “did not have statistically significant effects on test scores.”
Amid a classroom-based software boom estimated at $2.2 billion a year, debate continues to rage over the effectiveness of technology on learning and how best to measure it. But it is hard to tell that from technology companies’ promotional materials.”
From a September 1 “spotlight section” in the respected national journal, Education Week:
“While there is much on-going research on new technologies and their effects on teaching and learning, there is little rigorous, large-scale data that makes for solid research, education experts say. The vast majority of the studies available are funded by the very companies and institutions that have created and promoted the technology, raising questions of the research’s validity and objectivity. In addition, the kinds of studies that produce meaningful data often take several years to complete – a timeline that lags far behind the fast pace of emerging and evolving technologies.”
Finally, a report issued in September 2010 by the U.S. Department of Education, released under Secretary Arne Duncan’s name, took a meta-analytic approach to measuring the effects of online learning. The report notes:
“A systematic search of the research literature from 1996 through July 2008 identified more than a thousand empirical studies of online learning. Analysts screened these studies to find those that (a) contrasted an online to a face-to-face condition, (b) measured student learning outcomes, (c) used a rigorous research design, and (d) provided adequate information to calculate an effect size. As a result of this screening, 50 independent effects were identified that could be subjected to meta-analysis….”
While the report found modest improvement for a blended approach in many settings, it also cautions:
“…An unexpected finding was the small number of rigorous published studies contrasting online and face-to-face learning conditions for K-12 students. In light of this small corpus, caution is required in generalizing to the K-12 population because the results are derived for the most part from studies in other settings (e.g., medical training, higher education).”
In sum, for all the investment, there appears to be little rigorous data available – the sort that most parents would want their school to base its decisions upon, especially significant spending decisions – when deciding about the how and what and why of new technologies in school. Admittedly, part of the challenge of developing meaningful, rigorously vetted data is the very newness of the technologies themselves. For example, what is the likelihood that any longitudinal data could be established about the use of iPads in classrooms, when the product itself didn’t exist five years ago? In other words, one of the reasons we have so little evidence is that the rapidity of innovation is outpacing any attempt to assess the merits of the innovations.
While this, and a host of other factors, may be true, I know that I wouldn’t want the inclusion of new technologies in medical treatment for any of my loved ones to be based upon hopes, wishes, bells and whistles, or the common refrain, “my cousin’s kids’ school in New Jersey just bought a bunch of these, so maybe we should, too” (or some variant thereof).
Simply put, a thoughtful approach to introducing new technologies in schools has to recognize the allure, and the potential positive impact, of technology, while still being rigorous in its assessments and ultimately sober about how technology can further our mission. To that end, we have been diligent about making sure that any introduction of new technology is about meeting the mission of the school (and thus the learning of children) more effectively, and less about “winning the arms race,” as a colleague – a professor of sociology who specializes in assessment protocols for not-for-profits – recently put it.
Earlier this fall, I had the opportunity, with a very small group of independent school leaders, to spend a day as a guest at the Apple Computer Executive Training Center, where we were taken through the paces of all that iPads can do in education – and I was completely wowed. From Khan Academy to QWiki and a thousand other points of light, I felt as if I were privy to a glimpse of the future, and it was stunning. It seems to me that the task of school leadership is to “embrace the wow” while charting a course that exercises high standards of accountability throughout the process. I especially like how Dr. Steven Arnoff (of the Center for Leadership and Technology), one of the consultants helping us map out a strategic plan for technology, put it, offering excellent guidance to shape the next stages of our deliberations: “Like using any good tool, it is not about how much money you spend, but rather about revising the process to effect change.” Dr. Arnoff knows we’ll be spending a good deal of money on technology in the coming years, and he is wise to remind us that it won’t be the equipment or the software that makes the key difference; it’s the way in which we prepare ourselves, and then apply the technology.
Thus, without rigorously vetted data upon which to rely, a school like ours must look carefully first at our own experience: What is the current state of technology at our school? What seems to work for us, and what doesn’t? And what is the situation in schools we respect and may like to model? What has their experience been, for better and perhaps for worse, from which we can learn? And finally, can we identify gaps that – by addressing them – will help us do an even better job of achieving our vision of “… inspiring, celebrating and nurturing our students, in order to prepare them to become… self-confident, compassionate and practicing Jews and committed citizens, who are prepared to meet the academic and social challenges of the modern world…”?
In the following installments in this series, I will reflect on how the school currently implements technology and what we’ve learned from that use. Subsequent columns will describe the process we’ve employed to develop a “gap analysis,” and, finally, the steps we are considering and proposing to the Board of Trustees to enrich technology here at the school, while both employing rigor in our analyses and holding true to our mission.
Arnold Zar-Kessler has been associated with Schechter since 1993, as Upper School Director. He was interim Head of School for the 2000-2001 school year, and then became permanent Head of School.
Arnold has completed doctoral work at the Harvard Graduate School of Education, with a focus on how children learn the difference between scientific and religious knowledge. He has served as chair of the Association of Independent Schools in New England’s (AISNE) Membership Committee, the group designated to oversee accreditation processes for all member schools and to make recommendations to the Board on accepting or rejecting accreditation and re-accreditation applications – the first such appointment ever from a Jewish Day School. He currently serves on the board of AISNE and is leading their strategic planning process. He has also served as an Officer-at-Large for the Board of the Jewish Educators Assembly, the association for all Conservative Movement educators throughout North America.
He recently produced the well-received video for Schechter, “Our Task,” featuring Abraham Joshua Heschel.