
Week 6 Reflections

Getting uncomfortable

The most recent example of social media making me uncomfortable was learning that a video of my son was uploaded to TikTok without his knowledge or consent.  My oldest son, who was 8 at the time this happened (he is 10 now), is just coming to the age where social media and social networking are becoming increasingly prevalent.  This incident occurred during the summer of 2020, when the majority of his friends were his age (8 to 10).  As a quick aside, I was (and still am) completely floored by parents that allow children this young unbridled access to the internet and social media, let alone a phone with a data plan accessible at all times.  Call me old fashioned, but I think this is far too young for a child to be online and engaging on these platforms without supervision … but maybe that’s just me.

So one day my son, Tyler, was playing at the park with his friends.  Another group of kids in his grade was there as well and apparently videotaped him, unbeknownst to Tyler.  Later that day, someone else in Tyler’s class saw that the girl who videotaped him had posted the video to TikTok with a caption along the lines of “Tyler is dumb” or “Tyler is stupid”, or something childish like that.  Neither Tyler nor I even have a TikTok account, and the only reason we found out about it was that one of the kids who saw the video came to tell us about it.  While no real damage came of this incident, the whole thing left a bad taste in my mouth and left me with questions around privacy, safety and content control for these platforms.

What Can be Done?

For starters, when it comes to TikTok, improved parental controls are required.  At the time this incident happened there were very limited parental controls; however, this has since been addressed, and parents can now “… restrict who can comment on their teen’s videos, who can view their account, and who can see what videos they’ve liked”. [1]

For most social media sites, the age restriction for creating an account is 13, in accordance with the Children’s Online Privacy Protection Act (COPPA).  Additionally, there are usually features on social media platforms to report underage accounts.  That being said, this is obviously hard to manage, and these companies aren’t necessarily inclined to invest a lot of resources in rooting out underage accounts, as doing so goes directly against their bottom line.  I believe the issue is one of policy.  There needs to be greater reform, with real consequences, around minimum age restrictions for young children using social media sites that are not designed for their age group.  This would incentivise companies to dedicate the time and money required to build technical controls into their platforms that help adhere to these policies.
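To make that concrete, here is a rough sketch, in Python and with made-up names, of the kind of age check a sign-up flow could enforce to respect the COPPA minimum age.  This is purely an illustration of the idea, not any platform’s actual implementation, and it obviously does nothing about users who simply lie about their birth date, which is exactly why the policy side matters.

```python
from datetime import date
from typing import Optional

# Minimum age most platforms enforce to line up with COPPA.
MINIMUM_AGE = 13


def is_old_enough(birth_date: date, today: Optional[date] = None) -> bool:
    """Return True if the declared birth date meets the minimum age."""
    today = today or date.today()
    # Whole years of age, adjusting for whether the birthday has
    # happened yet this calendar year.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return age >= MINIMUM_AGE


def register_account(username: str, birth_date: date) -> str:
    """Reject sign-ups that declare an age below the minimum."""
    if not is_old_enough(birth_date):
        # A real platform would also need verification and parental-consent
        # workflows; a declared date of birth alone proves very little.
        raise ValueError(f"Users under {MINIMUM_AGE} cannot create an account.")
    return f"Account created for {username}"


# Example: an adult sign-up succeeds; a 2013 birth date would be rejected today.
print(register_account("sample_parent", date(1985, 6, 1)))
```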

 

[1] Kastrenakes, J. (2020, November 17). TikTok now lets parents make their teens’ accounts more private. The Verge. https://www.theverge.com/2020/11/17/21570244/tiktok-parental-controls-family-pairing-private-accounts-search-limits

 

 Social Media Management

Imagine that you are the steward of a simple threaded discussion forum. The discussion has become toxic - there are obvious instances of personal defamation and some quite upset users (some of whom are mailing you urgently asking you to stop), though nothing yet illegal. It is on a sensitive topic where opinions and beliefs vary considerably - say, gender, politics, or religion, perhaps. You have tools to shut down the whole forum, to cut off the thread, to talk to users individually, to disable user accounts, and so on, but nothing that would not be found in any simple forum software. You do not have recourse to robots! What should you do? What are your options? What are the risks?

Following on from this, you want to prevent it from happening again. You can hire a programmer to make any changes you like to the forum and the server it runs on. You can also do anything else, within reason, from shutting the whole thing down to establishing guidelines for behaviour. What should you do? What are your options? What are the risks?

 

To answer this question, I’m going to make some basic assumptions about the scenario:

1 – the discussion forum is a loosely formed community with no governance framework or policies;

2 – I am the moderator or an admin of the group; and

3 – the forum takes place on a platform that doesn’t have any acceptable use policy or terms & conditions.

With these basic assumptions in place, dealing with the immediate issue at hand becomes tricky.  Any overt intervention will likely be seen as unfavorable by proponents of at least one side of the argument/debate.  Terminating the thread entirely could alienate your user base and detract from what is normally a healthy, engaging online community.  The fundamental question of what you should do is a question of ethics.  Objectively, the right thing to do is what is morally right; the nuance here is that what is morally right for me will not be viewed as morally right by everyone wrapped up in this issue.  Ultimately, the fair course of action is to provide an unbiased warning to everyone on the thread that vulgarity and defamation will not be tolerated and that any future instances will result in temporary suspension of user accounts (or something to that effect).
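As a minimal sketch of that escalation path (warn everyone once, then temporarily suspend repeat offenders), assuming only the kind of basic account tools any simple forum software provides, something like the following Python captures the idea.  The records and names here are hypothetical; a real forum would keep this state in its database and deliver the warnings through its own messaging.

```python
from datetime import datetime, timedelta

# Hypothetical in-memory records; a real forum would persist these.
warned_users = {}      # username -> time the warning was issued
suspended_until = {}   # username -> time the suspension ends

SUSPENSION_LENGTH = timedelta(days=7)


def warn_thread(participants, message):
    """Send the same unbiased warning to every participant in the thread."""
    for user in participants:
        warned_users[user] = datetime.now()
        print(f"[warning to {user}] {message}")


def handle_violation(user):
    """Escalate: a user who offends again after being warned is temporarily suspended."""
    if user in warned_users:
        suspended_until[user] = datetime.now() + SUSPENSION_LENGTH
        print(f"{user} suspended until {suspended_until[user]:%Y-%m-%d}")
    else:
        # First contact with this user: warn rather than suspend outright.
        warned_users[user] = datetime.now()
        print(f"{user} warned individually")


# Example: warn everyone in the thread once, then suspend a repeat offender.
warn_thread(["alice", "bob"], "Vulgarity and defamation will not be tolerated.")
handle_violation("alice")
```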

To prevent this from happening again, the solution is clear.  At the very least, the forum needs to adopt basic terms and conditions with an Acceptable Use Policy, clearly outlining what conduct is and is not acceptable and the recourse available should these terms be violated.  The establishment of a Community Charter and guidelines of dos and don’ts could also help to inform and establish community behaviour.
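If I did hire that programmer, one small preventive change might be a pre-publication screen that holds posts matching the Acceptable Use Policy’s prohibited conduct for human review.  The Python below is only a sketch with an illustrative term list, not a claim about how any real forum engine works.

```python
# Illustrative terms only; a real Acceptable Use Policy would define
# prohibited conduct in far more detail (and not rely on keywords alone).
PROHIBITED_TERMS = ["example_slur", "example_defamatory_phrase"]

moderation_queue = []  # posts held back for a human moderator to review


def screen_post(author, text):
    """Hold a post for review if it appears to breach the Acceptable Use Policy.

    Returns True when the post is queued for human review rather than
    published immediately.
    """
    lowered = text.lower()
    matches = [term for term in PROHIBITED_TERMS if term in lowered]
    if matches:
        moderation_queue.append({"author": author, "text": text, "matched": matches})
        return True
    return False


# Example: a post containing a listed term is held back instead of published.
screen_post("carol", "Honestly, example_defamatory_phrase about another user.")
print(moderation_queue)
```

The point of a design like this is that the software only flags and holds; a human moderator still makes the final call, so governance stays with the community’s charter rather than with a keyword list.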

 

 Effects of design of social media (algorithms, structures, business practices, etc)

For my discussion this week, I’m going to focus on the most recent allegations against Facebook, wrapped up in what’s being dubbed The Facebook Papers.  The Coles Notes version of the issue: a former Facebook employee turned whistleblower leaked multiple internal company documents revealing gross incompetence and malpractice that contradict statements the company has previously made about combatting hate speech, misinformation and cultural division.

A good overview of the situation can be found here: https://www.npr.org/2021/10/25/1049015366/the-facebook-papers-what-you-need-to-know

An overview of the main revelation from the papers can be found here: https://arstechnica.com/tech-policy/2021/10/four-revelations-from-the-facebook-papers/

The Ars Technica story highlights four main revelations:

  1. Facebook’s hate-speech moderation is ineffective in other languages.  Following the 2017 accusation of enabling genocide in Myanmar, Facebook invested 87% of its budget dedicated to combating misinformation and hate speech in the United States and only 13% in the rest of the world.
  2. Facebook noted severe biases in the political content shown to males based on their race, but did not understand why, nor did it dedicate appropriate resources to finding out.
  3. Facebook’s reliance on AI has made it harder to actually report hate speech, with a reduced budget for human moderators and increased spending on AI technology cited as contributing factors.
  4. Facebook was slow to act on the spread of harmful content in the lead-up to and during the attack on the US Capitol.

While these papers do not tell the full story of the inner workings of Facebook, nor give the deep context behind what is written in the documents (and potentially inferred by external parties), there is enough aggregate information there to understand that Facebook is, at best, negligent and, at worst, morally and ethically corrupt.

Issues like the ones mentioned above, or at least the themes behind them (indifference, ineptitude), are not uncommon in a lot of organizations.  However, when you are a global platform with the ability to impact religion, politics, culture and foreign policy on a global scale, being complacent on matters such as hate speech and misinformation can only be viewed as grossly negligent and morally void.  The basic ethical principles of nonmaleficence and beneficence should inform all of Facebook’s policy and technology decisions, but that clearly has not been the case here.