
Teaching About Technology in Schools Through Technoskeptical Inquiry

By Jacob Pleasants, Daniel G. Krutka, and T. Philip Nichols

New technologies are rapidly transforming our societies, our relationships, and our schools. Look no further than the intense — and often panicked — discourse around generative AI, the metaverse, and the creep of digital media into all facets of civic and social life. How are schools preparing students to think about and respond to these changes?

In various ways, students are taught how to use technologies in school. Most schools teach basic computing skills and many offer elective vocational-technical classes. But outside of occasional conversations around digital citizenship, students rarely wrestle with deeper questions about the effects of technologies on individuals and society.

Decades ago, Neil Postman (1995) argued for a different form of technology education focused on teaching students to critically examine technologies and their psychological and social effects. While Postman’s ideas have arguably never been more relevant, his suggestion to add technology education as a separate subject to a crowded curriculum gained little traction. Alternatively, we argue that technology education could be an interdisciplinary endeavor that occurs across core subject areas. Technology is already a part of English Language Arts (ELA), Science, and Social Studies instruction. What is missing is a coherent vision and common set of practices and principles that educators can use to align their efforts.

To provide that coherent vision, in our recent HER article we propose “technoskepticism” as an organizing goal for teaching about technology. We define technoskepticism as a critical disposition and practice of investigating the complex relationships between technologies and societies. A technoskeptical person is not necessarily anti-technology, but rather one who examines technological issues deeply, from multiple dimensions and perspectives, much as an art critic examines a work of art.

We created the Technoskepticism Iceberg as a framework to support teachers and students in conducting technological inquiries. The metaphor of an iceberg conveys that many of technology’s most important influences lie beneath our conscious awareness. People often perceive technologies as tools (the “visible” layer of the iceberg), but technoskepticism requires that they also be seen as parts of systems (whose interactions produce many unintended effects) and as embedded with values about what is good and desirable (and for whom). The framework also identifies three dimensions of technology that students can examine. The technical dimension concerns the design and functions of a technology, including how it may work differently for different people. The psychosocial dimension addresses how technologies change our individual cognition and our larger societies. The political dimension considers who makes decisions concerning the terms, rules, or laws that govern technologies.

To illustrate these ideas, how might we use the Technoskepticism Iceberg to interrogate generative AI tools such as ChatGPT in the core subject areas?

A science/STEM classroom might focus on the technical dimension by investigating how generative AI works and demystifying its ostensibly “intelligent” capabilities. Students could then examine the infrastructures involved in AI systems, such as the immense computing power and specialized hardware that, in turn, carry profound environmental consequences. A teacher could then ask students to weigh the costs and potential benefits of ChatGPT in light of their values.

A social studies class could investigate the psychosocial dimension by examining the longer history of information technologies (e.g., the printing press, telegraph, internet, and now AI) and considering how each shifted people’s lives. Students could also explore political questions about what rules or regulations governments should impose on information systems that incorporate people’s data and intellectual property.

In an ELA classroom, students might begin by investigating the psychosocial dimensions of reading and writing, and the values associated with different literacy practices. Students could consider how the concept of “authorship” shifts when one writes by hand, with word processing software, or with ChatGPT. They could also ask how we should engage with AI-generated essays, stories, and poetry differently than with their human-produced counterparts. Such conversations would highlight how literary values are mediated by technological systems.

Students who use technoskepticism to explore generative AI technologies should be better equipped to act as citizens seeking to advance just futures in and out of schools. Our questions are these: What might it take to establish technoskepticism as an educational goal in schools? What support will educators need? And what might students teach us through technoskeptical inquiries?

References

Postman, N. (1995). The end of education: Redefining the value of school. Vintage Books.

About the Authors

Jacob Pleasants is an assistant professor of science education at the University of Oklahoma. Through his teaching and research, he works to humanize STEM education by helping students engage with issues at the intersection of STEM and society.

Daniel G. Krutka is a dachshund enthusiast, former high school social studies teacher, and associate professor of social studies education at the University of North Texas. His research concerns technology, democracy, and education, and he is the cofounder of the Civics of Technology project (www.civicsoftechnology.org).

T. Philip Nichols is an associate professor in the Department of Curriculum and Instruction at Baylor University. He studies the digitalization of public education and how science and technology condition the ways we practice, teach, and talk about literacy.

They are the authors of “What Relationships Do We Want with Technology? Toward Technoskepticism in Schools” in the Winter 2023 issue of Harvard Educational Review.