Unplugging Is Not The Solution You Want

The more we preach “better habits,” and the less we adopt them, the worse off we feel. Starting the day by checking email in bed somehow feels even worse when the world is telling us we’re losers for doing so. Judgement compounds our predicament.

Other people’s feelings do matter, but not so much that we shouldn’t each determine and advocate for our own way of living. Nobody is that selfless, nor should they aspire to be.

I’m not sure how I feel about this article. Some aspects I agree with, others I don’t. I will approach this same subject in a very different way…

Idea #1: The intention behind the creation of a tool is not necessarily what it will be used for

Generally, the tools people need to communicate with one another seem to develop wherever people congregate, so nearly everything we might consider “social media” existed long before home computers or the World Wide Web. To give an interesting historical example…

The first computer to be built and owned by a U.S. educational institution was the ILLIAC I at the University of Illinois, completed in 1952. By 1960, it was being used to run the first computer-assisted instruction project, Programmed Logic for Automatic Teaching Operations (or PLATO, for short). Throughout the early-to-mid 1970s, students and staff wrote many programs on PLATO that could be seen as forerunners of “social media”:

  • Multiuser Games (for example, Moonwar created by Louis Bloomfield in 1972)
  • Message Boards (Public Notes created by Dave Woolley in 1973)
  • Instant Messaging (TERM-talk created by Doug Brown in 1973)
  • Chatrooms (Talkomatic created by Dave Woolley in 1973)
  • Electronic Mail (Personal Notes created by Kim Mast in 1974)
    …etc.

The conventions of communicating through this digital medium also began to take shape, such as the use of emoticons.

In short, what was originally intended for top-down education became a means of conversing, playing games, and making art together. This leads us to an important question…

Question #1: If you make something and share it, to what extent are you responsible for how others use it?

Idea #2: A tool can be structured in a way that influences how it will be used

In the context of the above, I do not see modern “social media platforms” (like X, Facebook, and so on) as “necessary” for communication. If anything, I would say they are a detriment to real dialogue and connection when the companies behind them hire people to deliberately engineer those platforms to be addictive, deceptive, and/or to cater to extremes.

When it comes to programming, the distinction between “free software” (i.e., software that protects the rights or “freedoms” of the individuals and communities who use it) and “open source” (i.e., software that is merely available at zero cost or with “open” access to its source code) is important. It helps us understand whether “the users control the program” or “the program controls the users”.

While I can definitely empathize with the idea that “people need to eat”, at what point are we willing to take responsibility for our own actions? Just because someone has a job or owns a business does not mean that what they are doing contributes to everyone’s quality of life. In more extreme situations, the opinions of a tyrant can only shape society when they are empowered by the force of an unthinking mob. This leads us to another important question…

Question #2: Is it possible to find a balance?

There is an interesting tension between ideas 1 and 2, one that has repeatedly played out in different ways throughout the history of computing (and throughout history in general). Another example…

Phil Zimmermann was a peace activist. In 1991, he freely released Pretty Good Privacy (PGP), a program that made it relatively easy for anyone to encrypt data. It was intended to protect the communications of activists from corrupt governments, but it later became part of the foundation of e-commerce, enabling what is often cited as the first secure online credit card transaction, made in 1994 on a website known as NetMarket.

Similarly, some use “darknets” (like Tor, I2P, etc.) to carry out various “illicit trades” (e.g., trafficking in humans, weapons, and drugs), while others use them for whistleblowing and keeping important information from being suppressed.

Technology often amplifies the intentions of the people using it, reflecting more general human problems (like insatiable greed and lust for power, or the justification of violence towards others by dehumanizing or objectifying them).

It would seem that we can only be responsible for ourselves and encourage personal accountability whenever we can. Therefore…

Question #3: What alternatives are we applying our skills and efforts toward?

The inertia of “business as usual” is accelerating civilizational and ecological collapse. Technology is only useful to the extent that it helps to mitigate that tendency.

For example, we need ways to transition out of accepting “work” that is harmful, coupled with a system that provides enough information and support for us to become as self-sufficient as possible in meeting our basic survival needs. Can these things be accomplished through a cooperatively-owned crowdfunding platform connected to a peer-to-peer network that shares the knowledge generated by sustainable makerspaces/fab labs within real-world communities?

Just some stuff to think about (and act upon if you feel so inclined) 🙂

I’m grateful this post exists because, as a result, we’re having a great discussion, and that’s all I care about. Reading your thoughts on the subject is very interesting.

I remember reading this article in The Baffler about the topic years ago. It’s pretty provocative (the Unabomber turns up in the fourth paragraph!), but there are some interesting points.

I don’t disagree with anything in that article.
