Twitter: A Love-Hate Relationship

There is no question that modern life carries burdens incompatible with the brains we have evolved … “In a species that’s designed to live in groups of fifty to seventy, living in a group of several billion is just hard on everyone.”
—Andrew Solomon quoting Randolph Nesse in “The Noonday Demon: An Atlas of Depression”

I have an aversion to social media. I have never had a Facebook account and I never will. I do not use Instagram, WhatsApp, Reddit, TikTok, Snapchat, or Pinterest. I do not understand why people write for platforms like Medium, when setting up a personal blog is so easy. I have used LinkedIn as a recruiting tool for work, but I detest LinkedIn as a promiscuous and unfocused platform. Twitter, however, is a different story.

I was originally interested in Twitter because of the technology, long before I created an account. As I will elaborate below, I found Twitter’s real-time, full-text search revolutionary for finding niche information. Furthermore, I was interested in Twitter’s infrastructure for scalable, real-time publish-subscribe messaging and how it might influence my work building somewhat similar systems for industrial automation and IoT.

I didn’t create a Twitter account until 2014. I was pretty late to the party. I used my account for two things: following the work of people in the software industry and sharing my writing on my blog. Initially, Twitter was great. Very few people followed me. I was never tempted to check notifications, because I never had any. I followed a focused set of accounts and I discovered many new ideas to apply to my work. It made me a better software developer and a better writer.

Over time, as I discovered more people with expertise in the topics that interested me, I followed more accounts. I also made a conscious effort to follow people with a diverse set of viewpoints. While there were many benefits, following more accounts made Twitter less curated and it became harder and harder to find the signal in the noise. When I initially shared my writing on my blog, no one took notice, but this too changed over time. Sharing my writing on Twitter eventually led to many wonderful interactions and opportunities, but it also led to a small number of hurtful interactions that still sting to this day.[1]

I’m not sure if it is Twitter’s manipulation of the content, or the anxiety surrounding the current pandemic, or a world too dominated by American politics and current events, but since the beginning of this year, many people who I enjoyed following on Twitter have been using it less and less. Some have stopped using it altogether. Others are using it differently, playing it safe, Tweeting announcements but never their own thoughts or opinions. At the same time, many others have become bolder, Tweeting stronger, more emphatic, more inflammatory messages. There seems to be more anxiety, judgement, intolerance, and hate, and much less curiosity, empathy, connection, and understanding, if there ever was any.

Perhaps this is the inevitable terminal state of a social media platform, but I remain torn: is Twitter a net positive or a net negative in my life? This essay is my attempt to organize my thoughts across this spectrum.

We simply do not have the computational capacity to manage social relationships effectively beyond this size. This suggests that increasing the group size beyond this number will result in significantly less social stability, coherence, and connectivity, ultimately leading to disintegration.
—Geoffrey West discussing Dunbar’s numbers in “Scale: The Universal Laws of Life and Death in Organisms, Cities and Companies”

Love

The first thing that interested me in Twitter was the ability to do full-text, real-time search, a feature that does not require an account. It is easy to underappreciate how incredible this feature is: I can find first-hand information about a niche event, in near real-time, from people I have never met, anywhere in the world. Amazing. Unique in human history.

I remember being stuck in a tunnel on a subway train with the faint smell of smoke and finding updates from people on the affected subway car, long before there were any official reports from the transit authority or traditional news media. In addition, I played in a pipe band for many years, which certainly qualifies as a niche hobby. I regularly used Twitter’s search to get results from pipe band competitions around the world, from people attending the event, as the results were announced, well before official results were posted online, hours or days after the event.

Social media has really exposed how completely brilliant almost everyone in the world is on certain topics, and then how completely misinformed almost everyone in the world is on other topics.
—Sarah Cone

Around the same time, I was working on publish-subscribe infrastructures for time-series data and events, infrastructure widely used in industrial automation, from manufacturing, to the process industries, to utilities.[2] The publish-subscribe subsystem was in-memory, server-centric, and built for consuming events, in near real-time, for a small number of event streams, like sensors. It was mainly used for updating time-series elements like graphs and tables on user interfaces used for monitoring, reporting, and process control.[3] I was involved in a development effort to scale the system to support millions of event streams, as well as architectures with clustered and federated servers. In addition, we wanted a common, durable journal for applications that required persisted, publish-subscribe messaging for data replication and change management.[4]

I was inspired by Twitter’s software architecture. Similar to these systems for industrial data, Twitter mixes the reliable query of immutable, historical events with a subscription to new events, as they happen. Twitter has a lot of skew where a small number of accounts can produce many Tweets, or have many followers, or both. Industrial data platforms are similar: many applications query for, or subscribe to, a small number of key performance indicators (KPIs) that update rapidly, like production throughput in manufacturing, or the frequency and load on a power grid. But just like Twitter, the vast majority of events are sampled and queried much less frequently. For example, some measurements are only used for annual environmental reporting or condition-based maintenance.
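The core of this pattern can be sketched in a few lines. The following is a minimal, in-memory illustration of the idea, not Twitter’s or any vendor’s actual implementation, and all names are hypothetical: a subscriber first replays the immutable history of an event stream from a journal, then receives new events as they are published.

```python
import threading
from typing import Callable, List, Tuple

Event = Tuple[int, str, float]  # (offset, stream, value)
Callback = Callable[[int, float], None]

class Journal:
    """An append-only journal with publish-subscribe delivery.

    A sketch only: a real system would persist the journal and would
    not invoke subscriber callbacks while holding the lock.
    """

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._events: List[Event] = []
        self._subscribers: List[Tuple[str, Callback]] = []

    def publish(self, stream: str, value: float) -> int:
        """Append an event and deliver it to matching subscribers."""
        with self._lock:
            offset = len(self._events)
            self._events.append((offset, stream, value))
            for name, callback in self._subscribers:
                if name == stream:
                    callback(offset, value)
        return offset

    def subscribe(self, stream: str, callback: Callback, from_offset: int = 0) -> None:
        """Replay history from an offset, then register for live events.

        Replay and registration happen under one lock, so no event is
        missed or delivered out of order between the two phases.
        """
        with self._lock:
            for offset, name, value in self._events:
                if name == stream and offset >= from_offset:
                    callback(offset, value)
            self._subscribers.append((stream, callback))

journal = Journal()
journal.publish("sensor.temperature", 21.5)  # published before anyone subscribes

received = []
journal.subscribe("sensor.temperature", lambda off, val: received.append(val))
journal.publish("sensor.temperature", 21.7)  # delivered live
# received == [21.5, 21.7]: the replayed reading followed by the live one
```

The `from_offset` parameter is the same idea as a consumer offset in a durable log like Apache Kafka: a reconnecting client resumes exactly where it left off, rather than re-reading the entire stream or missing events published while it was away.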

I remember reading Twitter’s Earlybird paper for insights on their architecture. The challenges of industrial computing environments are, in many ways, much more demanding than Twitter’s. For high-volume subscriptions, Twitter only needs to provide the impression of streaming every Tweet in near real-time, whereas in industrial environments, missing or out-of-order events can be misleading, or even unacceptable and unsafe. On the other hand, Twitter’s real-time infrastructure was very reliable and immensely scalable, with scalability and performance unprecedented for industrial software.

After finally creating a Twitter account in 2014, I used it as a microblogging platform—arguably the original purpose of the platform—to capture short thoughts, questions to ponder, quotes, and links to interesting articles or talks. In other words, ideas I wanted to come back to. Whenever I tweet, or re-tweet, I generally ask myself, “If I look back on my timeline a year or two from now, will this add value?” If the answer is no, I skip it. In response to a tweet, I don’t mind the odd question or comment, showing gratitude or expressing curiosity, but I generally don’t like Twitter as a place to have a long discussion or a back-and-forth conversation. I don’t like how people use Twitter for long threads, rather than publishing an essay. A few people can pull off tweeting something valuable more than a few times a week but, for me, less is more.[5]

Twitter is a few people saying interesting things amidst a much larger number saying mean or mistaken things. So are books. But you don’t suddenly get sentences from bad books in the middle of reading one of the good ones.
—Paul Graham

Many people criticise social media as an echo chamber where people are only exposed to information that already appeals to their biases, but I don’t think this is true. I’m much more likely to be exposed to a diverse set of opinions and experiences by following people from all over the world than I am from the smaller circle of direct family, friends from high school, or colleagues at work.[6]

Paired with the writing on my blog, Twitter has been a wonderful outlet for meeting people I never would have met otherwise: people with whom I have enjoyed conversations, a meal, a coffee, or a beer, and even people I’m grateful to now have as colleagues. Twitter has led to speaking engagements in some of my favourite international cities.[7] Combined with the ability to follow what some of the best minds of our generation are reading, watching, thinking, and doing, it is hard to see Twitter as anything but a net positive.

Hate

I hate how Twitter steals my attention. I do not have notifications enabled for Twitter—I do not enable notifications on my phone for anything other than text messages or phone calls from people I know, or critical alerts for production services at work. But this doesn’t stop me from opening Twitter on a regular basis to reflexively check for something new. I regularly log out of Twitter on my phone to force myself to check it more intentionally on a computer,[8] but it remains tempting to sign in first thing in the morning—I might see a Tweet linking to a really interesting article. Of course, I am just as likely to mindlessly scroll for thirty minutes, or start the day with nothing more than a vague anxiety about the state of the world.

In doing creative work, do not start your day with addictive time-vampires such as The New York Times, email, and Twitter. All scatter eye and mind, produce diverting vague anxiety, clutter short-term memory. Instead, begin right away with your work. Many creative workers have independently discovered this principle.
—Edward R. Tufte, The Future of Data Analysis

Social media platforms allow everyone to be both a producer and a consumer of information. This is often associated with an optimism about the fundamentally different opportunities it creates. We have, at least in theory, access to the same publishing platform as the most powerful and influential people in the world. But as Moxie Marlinspike notes in The New Yorker profile Taking Back Our Privacy, “What we didn’t necessarily anticipate, when everyone was so optimistic, was how little it would change things. The dream was always that, if someone in the suburbs of St. Louis got killed by a cop, immediately everyone would know about it. At the time, it was a sort of foregone conclusion that that would be enough.” In other words, just because we can immediately publish or consume pictures, text, and video, laced with outrage, it isn’t necessarily enough to prevent any of it from happening.

We have access to more information faster than ever, but perhaps very little utility comes from it beyond, as I noted earlier, getting updates on the smell of smoke in a subway car or the results from the latest pipe band competition. We are in an information crisis, nowhere better highlighted than by the current pandemic, where there is an abundance of information, yet it is very difficult to separate fact from opinion, from speculation, from propaganda.

Successful cultures are those that excel in reproducing their memes, irrespective of the costs and benefits to their human hosts.
—Yuval Noah Harari, “Sapiens: A Brief History of Humankind”

Which brings me to the thing I dislike most about Twitter: the binary framing of right and wrong, the inability to unfold ideas with curiosity, empathy, compassion, or humour, combined with the self-righteous policing of ideas.[9] Policing tweets to try to eliminate harm or correct misinformation usually devolves into just another form of attack, and one that assumes the attacker is correct. Piling on when someone makes a mistake, or holds an unpopular or dated position, can even amplify the original message. Instead of shaming or bullying, why not assume good intent, or have a quiet word, or simply ignore? Or explore the possibility of your own misinterpretation?

Robert M. Sapolsky, in the book Behave: The Biology of Humans at Our Best and Worst, references Jennifer Jacquet’s work on shame, writing: “Amid the potential good that can come from such shaming, Jacquet also emphasizes the dangers of contemporary shaming, which is the savagery with which people can be attacked online and the distance such venom can travel—in a world where getting to anonymously hate the sinner seems more important than anything about the sin itself.”

The result is that it becomes very dangerous to ever be wrong, controversial, or out of step. The only way to put ideas forward is to align them with the accepted groupthink of the day. Of course, this was not invented by Twitter—we’ve seen the harm done by this type of environment in science, medicine, law, and engineering, among other pursuits, throughout human history. And it’s not even just tweets: some people are happy to judge you harshly based on one or two of the accounts you follow, without considering that you might be following people with opinions entirely contrary to your own as a way to collect or test contrasting ideas. The same goes for an inadvertent like, using likes as bookmarks, or reading too much into “unfollowing” someone.

If all of your beliefs line up into one political party, you’re not a clear thinker. If all of your beliefs are the same as your neighbours and your friends, you’re not a clear thinker. Your beliefs are socialized, they’re taken from other people. So, if you want to be a clear thinker, you cannot pay attention to politics. It will destroy your ability to think.
—Naval Ravikant

Mike Solana, in Jump, an essay about what might happen in a world where millions of people can instantly react to speculative, incomplete, inaccurate, or malicious information, states, “Righteous anger is a powerful drug, and it clouds our judgment in relation to its scale; the more people there are around us shouting, the harder it is to think for oneself.” There is a real danger in large numbers of people acting rapidly and emotionally on information they have just received. New information, by its very nature, is usually incomplete or inaccurate and is slowly refined over time. This is the incremental path toward the truth so cherished in journalism, science, and academia. But on Twitter, no one can afford to be wrong, as there is rarely the space to consider, retract, or revise. Solana continues, “We need to name these concepts, we need to talk about them, and we need to make the act of calming down a cherished cultural institution.” He adds, “Getting comfortable with being wrong would also help, as would expecting people around us to be wrong. This, by the way, is something that happens more than it doesn’t. People are constantly wrong. Stories are constantly corrected.”

As I’ve said before, I make no pretense what is posted here is true, since I’m convinced I do not know what truth is, and neither does anyone else. “Here is one way to think about it” is all that can be said. So I’ll continue, and hope you will as well.
—Dee W. Hock

Tipping the Balance

I used to wonder why people were so much ruder on Twitter than real life. Then 2020 happened, and Twitter became real life. Now I think we’ll need a new social network that models physical-world levels of civility. Rebuild civilization online first, then offline.
—Balaji S. Srinivasan

There is nothing very social about social media. Iain McGilchrist, in his book The Master and His Emissary, a book that I think will eventually be viewed as one of the most important of a generation, explores how our divided brain, with the distinct specialties of the left and right hemispheres, has shaped the arc of human history as the balance between the two hemispheres has shifted. Recently, the “rational” left hemisphere has been seen as superior to the right, but McGilchrist argues it is the right hemisphere that is more reliable and insightful and that, without it, our world would be mechanistic, stripped of depth, value, and betweenness. He notes that empathic people mimic the facial expressions of those they are with, and describes how the brain shows activity in the hemisphere associated with empathy, the right hemisphere, only when people are working with other humans, not computers.

In a book equally influential on my thinking, Shame and Pride: Affect, Sex, and the Birth of the Self, by Donald L. Nathanson, the author describes how these roots of empathy are developed starting moments after birth, well before we can ever talk or make complex connections in the cortex, by mirroring universal affects with our parents. In other words, empathy is grounded in the body and in our embodied experience with other people. McGilchrist notes, “Social isolation leads to exaggerated fear responses, violence and aggression, and violence and aggression often lead, in turn, to isolation. Structures which used to provide context from which life derived its meaning had been powerfully eroded. Facilitating irony, distance and cynicism at the expense of empathy.” Twitter in a nutshell.[10]

McGilchrist describes how a world dominated by the influence of the left hemisphere will have “admiration for what is powerful rather than beautiful, a sense of alienated objectivity rather than engagement or empathy, and an almost dogmatic trampling on all taboos.” He draws a clear distinction between knowing something and experiencing something.[11] He believes alienation through shock and novelty will become defenses against the boredom and inauthenticity of modernity. In one of the final chapters, McGilchrist goes on to speculate, from first principles, what a world dominated by the closed system of the left hemisphere would look like: “Philosophically, the world would be marked by fragmentation, appearing to its inhabitants as if a collection of bits and pieces apparently randomly thrown together; its organization, and therefore meaning, would come only through what we added to it, through systems designed to maximize utility.” Twitter, with its appeal to the internally consistent world of the left hemisphere, seems like the perfect tool: anonymity, virtualisation, organizability, abstraction, representation, categorization, certainty, manipulation, control, and justice reduced to mere equality.

Nathanson, in his book published in 1992, well before the advent of Twitter or the ubiquity of the Internet, predicted our behaviours in response to chronic shame: “We are moving more and more into a culture of explosion, as a huge and growing segment of our society has adopted the macho script, within which shame is converted to anger and fear to excitement.” Moments of competence are engineered to produce pride and reduce chronic shame. He predicted the disavowal of shame and a combination of attacking others and affect dyscontrol to diffuse it.

Nathanson saw the world dividing roughly along two personality types: the humanist and the normative. “The humanist is more likely to smile, to enjoy the mutualization of positive affect, to experience more often the impediment of shame, and to tolerate or accept the affect of distress.” In other words, the humanist is not free of shame or negative affect, but the humanist knows how to identify the experience of shame and use it to draw attention to their own needs, and the needs of others. They can individuate and, at the same time, incorporate shame into a larger whole. In contrast, “The normative is far less likely to smile in public, will not tolerate shame (which is converted to dissmell and disgust towards others), and uses his anger as a stable force in interpersonal relationships.” Does anything sound more like Twitter?

Even though Twitter is described as a social media platform and it has the power to connect people through ideas from all across the globe, the experience is a normative one, not a humanist one. Any sense of connection is an illusion, because it is not an embodied connection.[12] As McGilchrist would certainly agree, tweets are all “re-presentations” of information—text, images, videos, links—all the domain of the left hemisphere, information lacking the context that comes from a broader, systemic, connected, and experienced view of the world.[13] Undoubtedly, Twitter will continue to evolve and efforts will be made to improve the platform,[14] but none of what I have just referenced provides much hope for social media in its current form.

Distrust essentialism. Keep in mind that what seems like rationality is often just rationalization, playing catch-up with subterranean forces that we never suspect. Focus on the larger, shared goals. Practice perspective taking. Individuate, individuate, individuate.
—Robert M. Sapolsky, “Behave: The Biology of Humans at Our Best and Worst”

I continue to use Twitter, but the balance is tipping. The people who brought me to Twitter are using it less, or not at all. After writing and reflecting on this essay, the only thing really keeping me on Twitter is the ability to follow the work of interesting people. As soon as I have another way to do this, a place to curate, accumulate, refine, and explore information, with genuine curiosity and respect, a place without macho exhibitionism and the need to be right, and a place to see the human side of people’s work and life, I will likely stop using Twitter. We need to look at the real world and not just re-presentations of it. We need forbearance: patient self-control, restraint, and tolerance. Assume good intentions and be nice to each other.


  1. I have only experienced a few incidents, but I have been attacked for what I’ve written on this blog, even by people who have never read the article; for quotes taken out of context; for my choice of employer; and for incorrect assumptions about my citizenship. I can only imagine the lasting impacts that come from the regular abuse faced by so many people on Twitter. ↩︎

  2. Some applications were similar to Grafana, but the updates were streamed to the client, on-change, rather than the client repeating database queries for the entire time window every few seconds. This approach is much more responsive and efficient for large datasets, especially in the constrained computing environments where these systems had their roots. ↩︎

  3. Listen to the podcast episode The Changing Landscape of Operational Technology for a history of these industrial software systems. The guest is a former colleague of mine. ↩︎

  4. Similar to Apache Kafka and other infrastructure available in modern platforms. I expanded on this subject in my article Shared-Nothing Architectures for Server Replication and Synchronization. ↩︎

  5. This doesn’t mean tweets need to be all-business. I like aesthetically pleasing photos of architecture and nature; I like to see where people are travelling and what projects they are working on; I enjoy seeing what people are eating or drinking; and I’m a sucker for a glimpse at the books people are reading. ↩︎

  6. See the Rationally Speaking podcast What the Internet Can Tell Us About Human Nature for speculation and research on this subject. ↩︎

  7. I’m very grateful to Michael Feathers for the opportunity to speak at QCon London 2017 based on an essay I wrote on Quality Views. It renewed my interest in speaking at conferences and has led to many wonderful opportunities. ↩︎

  8. Even as I write this essay, at times I flip back and forth from my text editor to Twitter as I try to establish a state of flow. ↩︎

  9. Even in the binary world of computing where everything is a matter of ones and zeros, deciding what is “right” is not as straightforward as we often think. The differentiation between zero and one is done within a band and is only understood relative to the operating voltage of the hardware. Error detection in computer memory can sometimes only detect errors, not correct them, relying on the assumption that errors are independent. Statistical approaches are used for deciding that a networked agent has disappeared. Finally, see the talk Bitsquatting for a fun demonstration of what is possible when there are bit errors. ↩︎

  10. The left hemisphere needs certainty and needs to be right. The right hemisphere can hold several ambiguous possibilities in suspension without premature closure on one outcome, which is important for metaphor, irony, and humour. If you see a humourless person on Twitter, run! ↩︎

  11. As an example of the contrast between newness and novelty, and knowing rather than experiencing, McGilchrist points out that no one ever decided not to fall in love because it had been done before. As a good friend reminded me, common human experiences are deeply meaningful not despite but because they are common. ↩︎

  12. Many of the Twitter accounts I find most valuable are people I have met in person on one or more occasions. For people in the software industry, this has often been at in-person conferences, an experience that, in my opinion, cannot be replaced with virtual conferences. ↩︎

  13. The left hemisphere exemplifies the context-independent rationality achieved by the interchangeability that results from abstraction and categorization. On Twitter, people often categorize themselves and others. Just look at a few bios or how people show outrage. Reason, in contrast to rationality, requires context. Resist the temptation to label, group, or categorize. Instead, individuate. ↩︎

  14. Or, cynically, at least improve the number of people clicking on advertisements. Personally, I think advertisements are part of the problem. I would be quite happy to pay for a private, advertisement-free, Twitter-like platform where every member is invested, literally, in a better experience. ↩︎