Nice find! As a philosophy graduate with an ongoing interest in ethics I have much enjoyed the series myself. I believe that the writing team included professional philosophers to help generate and critique the plot, so there's a lot of academic depth hiding just beneath the entertaining and absurd surface. It's quite fun trying to spot the philosophers/schools of philosophy being called upon in each episode, as well as in the broader trajectory of the plot.
I enjoyed this, too, and I didn't make the connection with last week's work: thanks for the reminder. For those not wanting to watch (or wanting to know more about it) there's an overview of the special at https://en.wikipedia.org/wiki/Bo_Burnham:_Inside
When my kids came to have a look at my half-finished new office (formerly bedroom) a couple of weeks ago they observed that it looked very much indeed like Bo Burnham's room: a lot of cables, microphones, cameras, musical instruments, lighting, computers, monitors, a chair, a desk, and not much else. I felt quite pleased.
Thanks Matt, I have been looking for something to watch on Netflix.
This insight is very interesting, and it makes sense. I definitely agree that the impact social media has on people depends heavily on who the users are and how they use it.
For all my years in information technology, it has seemed that the solutions we come up with are supposed to make things better, faster, easier, etc. A byproduct of such efficiencies is that workloads can be reduced and organizational positions made redundant. Luckily, I have been in good companies, where people who lose their jobs to technological solutions are repurposed into other value-added activities for the company. I suspect that is not always the case.

When the question of whether IT is making things better than they were came up, a situation from earlier in the year came to mind regarding Clearview AI's facial recognition software. After watching this video, I was creeped out and felt like wearing a hood over anything I do in public space (I will not really do this...). To think that any public image can be mined by this software and stored for easy retrieval within the app! If you consider every place where a potential image exists, that could be billions of images, both structured (social media sites) and unstructured (street cameras). This easy access to my likeness is why I don't use the facial recognition feature on my iPhone: I am fearful that somebody could reconstruct my image and possibly even create a realistic picture of me somewhere I have never been.
This next website has a list of facial recognition apps and their main use cases. I found it interesting to review them and think about whether we are better off as a society with each of them or not. Which ones do you think are really an asset to society and which ones "creep" you out?
Gio
Both fascinating and horrific, Gio, thanks for sharing. Your iPhone stores the data used to log you in only locally, like PIN and fingerprint data, and it's not your face as such, just a set of data points derived from it: in effect, much the same as a PIN. It's a piece of information that is held in only one physical place that, with luck, you are in complete control of. Apple go to great pains to try to prevent any possible access to it, even by determined professionals with access to the device. But if they have that, then you have much bigger problems than facial recognition :-) That local secure storage is what makes it relatively safe, and it is why you need to set it up again independently on all your devices. Not *too* worrying! At least, not as worrying as passwords, the hashes of which are stored on a server and thus, in principle, hackable even without physical access.
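To make the PIN analogy concrete, here's a minimal sketch in Python (purely illustrative; the function names are mine, and Apple's Secure Enclave is of course far more sophisticated) of how a secret can be checked against a stored *derived* template, so the secret itself is never stored anywhere:

```python
import hashlib
import hmac
import os

def enroll(secret: bytes) -> tuple[bytes, bytes]:
    """Derive a template from a secret; only the salt and template are stored."""
    salt = os.urandom(16)
    template = hashlib.pbkdf2_hmac("sha256", secret, salt, 200_000)
    return salt, template  # kept locally on the device, never sent to a server

def verify(secret: bytes, salt: bytes, template: bytes) -> bool:
    """Re-derive the template and compare; the raw secret never leaves the device."""
    candidate = hashlib.pbkdf2_hmac("sha256", secret, salt, 200_000)
    return hmac.compare_digest(candidate, template)
```

The point of the design is that an attacker would have to extract the salt and template from your one physical device, rather than dumping a server database of password hashes, and even then they would only have derived data points, not the secret (or face) itself.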
But those public face recognition systems certainly are very worrying indeed: interesting to reflect on how your behaviour might change if you know you are in a panopticon (note that behavioural change was exactly the point of Bentham's original dystopic invention), especially one in which the perceivers are incredibly fallible and prone to error. There are lots of counter-technologies, of course - e.g. see https://www.wired.co.uk/article/avoid-facial-recognition-software for a top-down overview with some examples. Knowing your enemy is important - these are not intelligent systems, in the sense of being human-like in their perceptions of you! And, like the iPhone (or equivalents in Android, Windoze, etc), not all are evil. It would be interesting to reflect on precisely what it is about the others that makes them more or less evil - I suspect it might help to get to the heart of understanding how social media (and computers in general) have changed the conversation about privacy, and rights of individuals to it.
Jon
Those face recognition systems can go very wrong...
Here are deepfake videos of Mark Zuckerberg and Obama. It can be extremely difficult for the general public to tell they're fake (I can't tell myself).
https://www.cnn.com/2019/06/11/tech/zuckerberg-deepfake/index.html
And even if some of the companies might not have bad intentions when collecting the data, their systems can be hacked and our data used for evil ends. It's not only facial data: voice data can also be misused, for example to impersonate us in unethical or illegal activities.
I once got a call on my office number earlier this year, and it was strange: the caller kept asking me weird questions (like asking me to confirm my email address, and wanting me to answer yes). I just felt weird, so while still on the phone I decided to google whether this was some kind of scam, and then I saw this: https://www.cbc.ca/news/canada/edmonton/can-you-hear-me-phone-scam-warning-bbb-1.3970312
Thank goodness I don't usually say the word "yes" but respond to questions with "yeah" (which really frustrated the scammer, since I wouldn't say the word "yes"...).
However, if they can deepfake my voice, they can create the word "yes" themselves without me saying it. Here are some more articles on scammers potentially using deepfake voice technologies. A CEO in the UK was scammed out of $243,000:
https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/#7fb95fe92241
https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402
https://www.pandasecurity.com/mediacenter/news/deepfake-voice-fraud/