Critical pedagogy for social media



“The classroom remains the most radical space of possibility in the academy”

― bell hooks

The above quote from bell hooks's book “Teaching to Transgress” exhorts us to expand our vision of the classroom. hooks provokes us to move beyond passive instructional pedagogy and to imagine learning communities that are collaborative and that nurture dialogue and critical thinking.

Reframing this in the context of the current project, the classroom offers radical possibilities of collaborating with “digital natives” (Prensky, 2001) to uncover the socio-historical context of existing digital landscapes and unpack how power is perpetuated, exacerbated, and mitigated by information systems. It also offers us unique possibilities of crafting digital futures that cultivate meaningful relationships and foster social change.

Technical systems, akin to social and legal codes, are entrenched in the inequalities of power plaguing our societies. In popular discourse, replacing human judgment with AI-based decision making is wrongly considered a viable way to address bias in institutions such as the criminal justice system. These assumptions stem from beliefs that Big Data is “unbiased”, “objective”, and “theory-free” (Balazka & Rodighiero, 2020). Such assumptions about the implicit neutrality of digital architecture are harmful because they can lead to overconfidence in exactitude, underestimation of risks, and minimization of epistemological issues.

In her seminal book “Race After Technology: Abolitionist Tools for the New Jim Code”, Ruha Benjamin (2019) argues that racism and other forms of discrimination are embedded in digital architectures. Across sectors as diverse as health care, criminal justice, and finance, researchers have demonstrated how supposedly “unbiased” algorithms systematically discriminate against people of color. Obermeyer and colleagues (2019) showed that a widely used healthcare algorithm was more likely to flag white patients for extra medical attention than Black patients who were equally sick.

Algorithms betray the biased assumptions not only of the individuals and institutions who create them but also of society as a whole. All AI-based decision-making tools must be “trained” on existing datasets, so by default their outputs perpetuate the biases present in those datasets.
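As a concrete illustration of this training dynamic, consider the following minimal sketch. The data and the scenario are entirely hypothetical, and the “model” is deliberately naive (a per-group approval rate learned from past decisions), but it shows the core mechanism: a system trained on historically skewed records reproduces that skew.

```python
# Hypothetical sketch: a naive "model" trained on biased historical
# decisions reproduces the bias in its own predictions.
from collections import defaultdict

# Hypothetical historical loan decisions: (group, approved)
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

# "Training": learn each group's approval rate from past decisions.
counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in history:
    counts[group][0] += approved
    counts[group][1] += 1

def predict(group):
    # Approve whenever the group's historical approval rate is >= 50%.
    approved, total = counts[group]
    return approved / total >= 0.5

print(predict("A"))  # True  - group A, historically favored, stays favored
print(predict("B"))  # False - group B, historically denied, stays denied
```

Nothing in the code mentions race, gender, or intent; the disparity enters purely through the training data, which is precisely the point the paragraph above makes.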

There is a need to critically examine the technical choices underlying digital infrastructure. These choices determine the nature, purpose, and outcome of digital applications. They also offer important clues to how structural problems in society extend to digital contexts.

Risks and affordances enabled by the rise of social media

Digital technologies act as neural pathways supporting mobilizing efforts both online and offline. On one hand, this has enabled mass mobilization across the globe around issues of racism and gender discrimination. The Black Lives Matter movement is a case in point: social media enabled the mobilization of mass protests against racial violence across the US and the globe. According to some estimates, the protests following the murder of George Floyd constituted the largest anti-racist mobilization in the history of the US (Buchanan et al., 2020).

Social media platforms, unfortunately, have also created opportunities for mobilization around extremist ideologies and a re-emergence of the alt-right across the globe. According to Jessie Daniels (2018), “The rise of the alt-right is both a continuation of a centuries-old dimension of racism in the U.S. and part of an emerging media ecosystem powered by algorithms”. The alt-right was an early adopter of social media and is heavily invested in building communities online and recruiting White working-class youth into its fold. When such users were banned from mainstream social media sites for spreading false information and violent content, they created parallel digital forums of their own, such as Gab. By exploiting the affordances of emerging technologies, the alt-right has been able to expand the boundaries of acceptable ideas in public discourse, a range also termed the “Overton window”. The implications of these shifts, including the resultant rise in intolerance and hate, the dehumanization of the other, and our own vulnerability to false information, are important social justice issues of our times.

Through their ability to customize the content visible to us on the internet, algorithms can create echo chambers that reinforce our existing beliefs. This increases the polarization of views and aids the spread of misinformation. It also masks the covert operations of machineries that spread intolerance and hate: most people are unaware that platforms like Gab, with over 1.1 million users, even exist.
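The echo-chamber mechanism can be sketched in a few lines. The posts and clicks below are hypothetical, and real recommender systems are vastly more complex, but the feedback loop is the same: ranking by past engagement surfaces more of what the user already clicks, which in turn generates more clicks on that same content.

```python
# Hypothetical sketch of engagement-based ranking creating an echo chamber:
# the feed learns from clicks and keeps surfacing the familiar topic.
from collections import Counter

posts = [
    {"id": 1, "topic": "politics_left"},
    {"id": 2, "topic": "politics_right"},
    {"id": 3, "topic": "politics_left"},
    {"id": 4, "topic": "sports"},
]

clicks = Counter()  # topic -> number of past clicks by this user

def rank_feed(posts, clicks):
    # Rank posts by how often the user has clicked that topic before.
    return sorted(posts, key=lambda p: clicks[p["topic"]], reverse=True)

# The user clicks two left-leaning posts...
clicks["politics_left"] += 2

feed = rank_feed(posts, clicks)
print([p["topic"] for p in feed])
# ...and left-leaning posts now dominate the top of the feed.
```

With each round of clicks the ranking tilts further toward the already-preferred topic, which is the reinforcement dynamic described above.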

As educators, it is paramount that we provide our students with the necessary intellectual and digital tools to dissect and confront these unjust technological infrastructures. In the words of Paulo Freire (1972), “There is no such thing as neutral education. Education either functions as an instrument to bring about conformity or freedom.”

References

Balazka, D., & Rodighiero, D. (2020). Big data and the little big bang: An epistemological (r)evolution. Frontiers in Big Data, 3, 31.

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.

Buchanan, L., Bui, Q., & Patel, J. K. (2020). Black Lives Matter may be the largest movement in US history. The New York Times.

Daniels, J. (2018). The algorithmic rise of the “alt-right”. Contexts, 17(1), 60-65.

Freire, P. (1972). Pedagogy of the oppressed. New York: Herder and Herder.

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453.

Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5), 1-6.
