
Five Things We Need to Know About Technological Change, by Neil Postman

https://web.cs.ucdavis.edu/~rogaway/classes/188/materials/postman.pdf

I mentioned this in a reply I recently made to a post by Gio (and comments by Jenny), but it occurs to me that it deserves a more prominent place in a course on social computing, so here it is. I consider this very short talk by Postman to be one of the most succinct, powerful, and insightful papers ever written on the subject of technology. Given that it was delivered in 1998 (so the technologies he emphasizes are mostly not those we care about so much now), it has some remarkably prescient things to say about where we are now in the realm of social media. He wrote many books and papers, all well worth reading, that delve deeper into these and other ideas, but this is a brilliant summary. In brief, the 'five things' referred to in the title are:

  1. that culture always pays a price for technology.
  2. that there are always winners and losers in technological change.
  3. that every technology has a philosophy which is given expression in how the technology makes people use their minds, in what it makes us do with our bodies, in how it codifies the world, in which of our senses it amplifies, in which of our emotional and intellectual tendencies it disregards.
  4. that technological change is not additive; it is ecological.
  5. that media tend to become mythic.


Read the piece for explanations of what he means. There's a lot of this that is drawn from Marshall McLuhan, but Postman's distinctive take on it clarifies and amplifies McLuhan's messages. There's barely a wasted word in the whole thing. It deserves close reading.

There's an HTML version at https://www.student.cs.uwaterloo.ca/~cs492/papers/neil-postman--five-things.html

Comments

  • Jenny Chun Chi Lien September 25, 2020 - 2:32pm


    Thank you, Professor Jon, for sharing.

    I think we will constantly have to contend with these challenges.

    I would think that I, as someone working in an IT firm, am one of those gaining an advantage from technology, while many others might be harmed by it or suffer from it.

    Working as a Data Scientist, I find this question constantly popping up: is my job ethical? My first Data Scientist job was to reduce the workload of my coworkers. My position's goal was to reduce the manual analysis process, and people are losing their jobs because the models now automatically classify events. The models were trained on data that was processed by these analysts, and the data they created throughout the years has helped the models eliminate their own jobs. For many of us working in IT, our job is to replace other people's jobs with automation...
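    To make that concrete, the pattern looks roughly like this (a simplified, hypothetical sketch in Python with invented data – not my employer's actual code): years of analyst-labelled events become the training set, and the fitted model then does the triage that the analysts used to do by hand.

        # Hypothetical sketch: a classifier trained on labels that human
        # analysts produced by hand, which then automates their triage work.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Each record pairs an event description with the label a human
        # analyst assigned to it (invented examples).
        historical_events = [
            ("multiple failed logins from a new location", "suspicious"),
            ("scheduled nightly backup completed", "benign"),
            ("privilege escalation attempt on production server", "suspicious"),
            ("routine software update installed", "benign"),
        ]
        texts, labels = zip(*historical_events)

        # The analysts' own judgements are the training signal.
        model = make_pipeline(TfidfVectorizer(), LogisticRegression())
        model.fit(texts, labels)

        # New events are now classified without a human in the loop.
        print(model.predict(["unexpected outbound transfer to unknown host"]))

    The unsettling part is exactly what the sketch shows: every labelled row is a piece of an analyst's judgement, handed over to the thing that replaced them.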

    Previously, the companies I worked for cared about the social aspects of things (or at least they said so), so I felt I was working for companies doing good things, and therefore doing something good myself – even though a huge part of my job was to automate other people's jobs. But at least the companies were doing good things... right?

    Am I making the world a better place? The truth is that technology has widened the gap between the rich and the poor. The people in tech firms are getting richer and richer. I get paid more than friends my age who are smart and hard-working but are not in tech firms.

    I enjoy many aspects of my job – especially because I get to do things that no one else in the companies I've worked for has done. I like the feeling that I am learning so many things every day. I also have a stable career, so I don't have to worry about money.

    However, technology should have the main goal of making human lives better and making the world a better place. Sometimes I question whether my job is meaningful. That's part of the reason I am taking on two volunteer positions right now – it reduces my guilt at being one of the privileged people in the tech field, benefiting and getting richer just by staying there and automating other people's jobs.

    There's one project I am currently working on as a volunteer with colleagues in Europe, which tries to use technology to address a human and social problem: we hope to increase people's well-being by getting them into fitness activities such as running and yoga, as well as promoting climate change awareness. In this project I analyze people's data, trying to understand and to shape their behaviour – encouraging giving behaviours (donating, volunteering, etc.) – just as social media does: changing people's behaviour. But I hope that, with the knowledge I have, my skills are being used for something good, something I can be proud of.

    Honestly, even doing "good things" makes me feel privileged. Working in a tech firm, I get paid well enough that I don't have to worry about money as much as many others, and I get to be the "good guy" doing "good things" and be proud of it – not because I am a better person than many others I know, but because I am lucky enough to have the extra capacity to think about all these issues.



  • Giovanni Tricarico October 8, 2020 - 11:39am

    For all my years in information technology, it has seemed that the solutions we come up with are supposed to make things better, faster, easier, etc. The byproduct of such efficiencies is that workloads can be reduced and organizational positions made redundant. Luckily, I have been in good companies, where people who lose their jobs to technological solutions are repurposed into other value-added activities for the company. I suspect that that is not always the case. When the question arose about whether IT is making things better than they were, this situation from earlier in the year came to mind regarding Clearview AI facial recognition software. After watching this video, I was creeped out and felt like wearing a hood over anything I do in the public space (I will not really do this...). To think that any public image can be mined by this software and stored for easy retrieval within the app. If you consider every place where there is a potential image, that could be billions of images, both structured (social media sites) and unstructured (street cameras). The easy access to my likeness is why I don't use the facial recognition feature on my iPhone, since I am fearful that somebody could reconstruct my image and possibly even create a realistic image of me somewhere that I have never been.

    https://www.cnn.com/videos/business/2020/02/12/facial-recognition-clearview-ai-shorter-orig.cnn-business/video/playlists/business-artificial-intelligence/

    This next website has a list of facial recognition apps and their main use cases.  I found it interesting to review them and think about whether we are better off as a society with each of them or not.  Which ones do you think are really an asset to society and which ones "creep" you out?

    Gio

  • Jon Dron October 8, 2020 - 12:30pm

    Both fascinating and horrific, Gio, thanks for sharing. Your iPhone only stores the data to log you in locally, like PIN and fingerprint data, and it's not your face as such, just a set of data points derived from it: in effect, much the same as a PIN. It's a piece of information that is only held in one physical place that, with luck, you are in complete control of. Apple go to great pains to try to prevent any possible access to it, even by determined professionals with access to the device. But, if they have that, then you have much bigger problems than facial recognition :-) The local secure storage is what makes it relatively secure, and it is why you need to set it up again independently on all your devices. Not *too* worrying! At least, not as worrying as passwords, the hash of which is stored on a server and thus, in principle, hackable even without physical access.
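    To make the contrast concrete, here is a toy sketch in Python (purely illustrative, and emphatically not Apple's actual implementation): a salted password hash has to live on a server, where a copied database can be attacked offline, while a face-unlock template of derived data points never leaves the device and is only ever compared locally.

        # Toy contrast between the two storage models - not real Face ID code.
        import hashlib, os, secrets

        # Password model: a salted hash is stored server-side; anyone who
        # copies the server database can try to crack it offline.
        salt = os.urandom(16)
        stored_hash = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)

        def check_password(attempt: bytes) -> bool:
            candidate = hashlib.pbkdf2_hmac("sha256", attempt, salt, 100_000)
            return secrets.compare_digest(stored_hash, candidate)

        # Face-unlock model (toy): a template of numbers derived from the
        # face at enrolment stays on the device; matching is a local
        # similarity test, so there is no central database of faces to breach.
        enrolled_template = [0.12, 0.87, 0.45, 0.33]  # derived on-device at setup

        def check_face(candidate, tolerance=0.1) -> bool:
            distance = sum((a - b) ** 2
                           for a, b in zip(enrolled_template, candidate)) ** 0.5
            return distance < tolerance

        print(check_password(b"hunter2"))            # True
        print(check_face([0.13, 0.86, 0.46, 0.34]))  # True: close enough

    The point of the toy is the asymmetry: stealing stored_hash gives an attacker something to grind away at, whereas there is simply no server-side copy of enrolled_template to steal.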

    But those public face recognition systems certainly are very worrying indeed: interesting to reflect on how your behaviour might change if you know you are in a panopticon (note that behavioural change was exactly the point of Bentham's original dystopic invention), especially one in which the perceivers are incredibly fallible and prone to error. There are lots of counter-technologies, of course - e.g. see https://www.wired.co.uk/article/avoid-facial-recognition-software for a top-down overview with some examples. Knowing your enemy is important - these are not intelligent systems, in the sense of being human-like in their perceptions of you! And, like the iPhone (or equivalents in Android, Windoze, etc), not all are evil. It would be interesting to reflect on precisely what it is about the others that makes them more or less evil - I suspect it might help to get to the heart of understanding how social media (and computers in general) have changed the conversation about privacy, and rights of individuals to it.

    Jon

  • Jenny Chun Chi Lien October 8, 2020 - 12:58pm

    Those face recognition systems can go very wrong...

    Here are deepfake videos of Mark Zuckerberg and Obama. It can be extremely difficult for the general public to tell they're fake (I myself can't tell):

    https://www.cnn.com/2019/06/11/tech/zuckerberg-deepfake/index.html

    And even if some of the companies might not have bad intentions when collecting the data, their systems can be hacked and our data used for evil ends. Not only facial data: voice data can potentially also be used to reconstruct our voices for unethical or illegal activities.

    I once got a call on my office number earlier this year, and the caller kept asking me weird questions (he asked a bunch of things, like whether I could confirm my email address, and wanted me to answer "yes"). It felt weird, so while still on the phone I decided to google whether this was some kind of scam, and then I saw this: https://www.cbc.ca/news/canada/edmonton/can-you-hear-me-phone-scam-warning-bbb-1.3970312

    Thank goodness I don't usually say the word "yes" but instead respond to questions with "yeah" (which really frustrated the scammer, since I wouldn't say the word "yes"...).

    However, if they can deepfake my voice, they can create the word "yes" themselves without me ever saying it. Here are some more articles on scammers potentially using deepfake voice technologies; a CEO in the UK was scammed out of $243,000:

    https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/#7fb95fe92241

    https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402

    https://www.pandasecurity.com/mediacenter/news/deepfake-voice-fraud/